I0131 21:09:24.852979       8 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0131 21:09:24.853858       8 e2e.go:109] Starting e2e run "84d426e0-3c7f-49d7-9b96-379ccbf45ea2" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580504963 - Will randomize all specs
Will run 278 of 4814 specs

Jan 31 21:09:24.907: INFO: >>> kubeConfig: /root/.kube/config
Jan 31 21:09:24.913: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 31 21:09:24.944: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 31 21:09:24.993: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 31 21:09:24.993: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 31 21:09:24.993: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 31 21:09:25.006: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 31 21:09:25.007: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 31 21:09:25.007: INFO: e2e test version: v1.17.0
Jan 31 21:09:25.012: INFO: kube-apiserver version: v1.17.0
Jan 31 21:09:25.012: INFO: >>> kubeConfig: /root/.kube/config
Jan 31 21:09:25.034: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:09:25.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Jan 31 21:09:25.275: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 21:09:25.346: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0afd4c2-e95f-4ece-beb1-1c0aa8624351" in namespace "downward-api-8827" to be "success or failure"
Jan 31 21:09:25.364: INFO: Pod "downwardapi-volume-b0afd4c2-e95f-4ece-beb1-1c0aa8624351": Phase="Pending", Reason="", readiness=false. Elapsed: 18.773902ms
Jan 31 21:09:27.378: INFO: Pod "downwardapi-volume-b0afd4c2-e95f-4ece-beb1-1c0aa8624351": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031924545s
Jan 31 21:09:29.387: INFO: Pod "downwardapi-volume-b0afd4c2-e95f-4ece-beb1-1c0aa8624351": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041301245s
Jan 31 21:09:31.394: INFO: Pod "downwardapi-volume-b0afd4c2-e95f-4ece-beb1-1c0aa8624351": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048720032s
Jan 31 21:09:33.614: INFO: Pod "downwardapi-volume-b0afd4c2-e95f-4ece-beb1-1c0aa8624351": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.268701174s
STEP: Saw pod success
Jan 31 21:09:33.615: INFO: Pod "downwardapi-volume-b0afd4c2-e95f-4ece-beb1-1c0aa8624351" satisfied condition "success or failure"
Jan 31 21:09:33.660: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b0afd4c2-e95f-4ece-beb1-1c0aa8624351 container client-container: 
STEP: delete the pod
Jan 31 21:09:33.769: INFO: Waiting for pod downwardapi-volume-b0afd4c2-e95f-4ece-beb1-1c0aa8624351 to disappear
Jan 31 21:09:33.798: INFO: Pod downwardapi-volume-b0afd4c2-e95f-4ece-beb1-1c0aa8624351 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:09:33.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8827" for this suite.

• [SLOW TEST:8.781 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":15,"failed":0}
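
The spec above wires the downward API into a volume so the container can read its own pod name from a file. A minimal sketch of the kind of pod it creates, with illustrative names; the suite's actual pod runs its own mounttest image that prints the file and exits:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the suite generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # stand-in image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the kubelet writes the pod's own name into this file

The container exits 0 after reading the file, so the polling above eventually sees Phase="Succeeded", the framework's "success or failure" condition.
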
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:09:33.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-18b4b609-f19f-48ac-8c96-e0cab510676d
STEP: Creating a pod to test consume secrets
Jan 31 21:09:34.136: INFO: Waiting up to 5m0s for pod "pod-secrets-1a2aa842-058c-473b-9d4a-12cd09fb77d9" in namespace "secrets-2633" to be "success or failure"
Jan 31 21:09:34.148: INFO: Pod "pod-secrets-1a2aa842-058c-473b-9d4a-12cd09fb77d9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.210651ms
Jan 31 21:09:36.154: INFO: Pod "pod-secrets-1a2aa842-058c-473b-9d4a-12cd09fb77d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017742521s
Jan 31 21:09:38.167: INFO: Pod "pod-secrets-1a2aa842-058c-473b-9d4a-12cd09fb77d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030778143s
Jan 31 21:09:40.173: INFO: Pod "pod-secrets-1a2aa842-058c-473b-9d4a-12cd09fb77d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036823585s
Jan 31 21:09:42.181: INFO: Pod "pod-secrets-1a2aa842-058c-473b-9d4a-12cd09fb77d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045013799s
STEP: Saw pod success
Jan 31 21:09:42.182: INFO: Pod "pod-secrets-1a2aa842-058c-473b-9d4a-12cd09fb77d9" satisfied condition "success or failure"
Jan 31 21:09:42.185: INFO: Trying to get logs from node jerma-node pod pod-secrets-1a2aa842-058c-473b-9d4a-12cd09fb77d9 container secret-volume-test: 
STEP: delete the pod
Jan 31 21:09:42.220: INFO: Waiting for pod pod-secrets-1a2aa842-058c-473b-9d4a-12cd09fb77d9 to disappear
Jan 31 21:09:42.226: INFO: Pod pod-secrets-1a2aa842-058c-473b-9d4a-12cd09fb77d9 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:09:42.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2633" for this suite.
STEP: Destroying namespace "secret-namespace-9030" for this suite.

• [SLOW TEST:8.428 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":15,"failed":0}
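
The second "Destroying namespace" line above ("secret-namespace-9030") gives the setup away: two secrets share one name across two namespaces, and the pod must mount only the one from its own namespace. A sketch with illustrative names and values, assuming both namespaces already exist:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test          # same name in both namespaces
  namespace: secrets-a
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
  namespace: secrets-b
stringData:
  data-1: other-value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
  namespace: secrets-a
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox           # stand-in image
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]   # must print value-1, never other-value
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # resolved in the pod's own namespace only
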
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:09:42.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:09:42.371: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 31 21:09:42.379: INFO: Number of nodes with available pods: 0
Jan 31 21:09:42.379: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 31 21:09:42.454: INFO: Number of nodes with available pods: 0
Jan 31 21:09:42.454: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:43.467: INFO: Number of nodes with available pods: 0
Jan 31 21:09:43.467: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:44.462: INFO: Number of nodes with available pods: 0
Jan 31 21:09:44.462: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:45.460: INFO: Number of nodes with available pods: 0
Jan 31 21:09:45.461: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:46.463: INFO: Number of nodes with available pods: 0
Jan 31 21:09:46.463: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:47.460: INFO: Number of nodes with available pods: 0
Jan 31 21:09:47.460: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:48.467: INFO: Number of nodes with available pods: 0
Jan 31 21:09:48.468: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:49.463: INFO: Number of nodes with available pods: 1
Jan 31 21:09:49.463: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 31 21:09:49.541: INFO: Number of nodes with available pods: 1
Jan 31 21:09:49.541: INFO: Number of running nodes: 0, number of available pods: 1
Jan 31 21:09:50.552: INFO: Number of nodes with available pods: 0
Jan 31 21:09:50.552: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 31 21:09:50.573: INFO: Number of nodes with available pods: 0
Jan 31 21:09:50.573: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:51.585: INFO: Number of nodes with available pods: 0
Jan 31 21:09:51.585: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:52.586: INFO: Number of nodes with available pods: 0
Jan 31 21:09:52.586: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:53.580: INFO: Number of nodes with available pods: 0
Jan 31 21:09:53.580: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:54.585: INFO: Number of nodes with available pods: 0
Jan 31 21:09:54.585: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:55.580: INFO: Number of nodes with available pods: 0
Jan 31 21:09:55.580: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:56.581: INFO: Number of nodes with available pods: 0
Jan 31 21:09:56.581: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:57.585: INFO: Number of nodes with available pods: 0
Jan 31 21:09:57.585: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:58.581: INFO: Number of nodes with available pods: 0
Jan 31 21:09:58.581: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:09:59.581: INFO: Number of nodes with available pods: 0
Jan 31 21:09:59.581: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:10:00.585: INFO: Number of nodes with available pods: 0
Jan 31 21:10:00.585: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:10:01.580: INFO: Number of nodes with available pods: 1
Jan 31 21:10:01.580: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-927, will wait for the garbage collector to delete the pods
Jan 31 21:10:01.652: INFO: Deleting DaemonSet.extensions daemon-set took: 10.752621ms
Jan 31 21:10:01.952: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.669125ms
Jan 31 21:10:12.370: INFO: Number of nodes with available pods: 0
Jan 31 21:10:12.370: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 21:10:12.382: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-927/daemonsets","resourceVersion":"5592021"},"items":null}
Jan 31 21:10:12.451: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-927/pods","resourceVersion":"5592022"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:10:12.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-927" for this suite.

• [SLOW TEST:30.279 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":3,"skipped":21,"failed":0}
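
A "complex" daemon here is a DaemonSet gated by a node selector: its pods appear only on nodes carrying the matching label (the blue/green relabeling above), and the spec flips the update strategy to RollingUpdate partway through. A sketch of the shape involved; the label key and image are assumptions:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate       # switched to this strategy mid-test
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      nodeSelector:
        color: green          # assumed label key; pods schedule only onto matching nodes
      containers:
      - name: app
        image: nginx          # stand-in image

Labeling a node (for example, kubectl label node jerma-node color=green) makes a daemon pod appear there, and relabeling it drains the pod again, which is the available-pod counting visible in the polling above.
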
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:10:12.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:10:12.649: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 29.721496ms)
Jan 31 21:10:12.654: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.467614ms)
Jan 31 21:10:12.660: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 6.112404ms)
Jan 31 21:10:12.665: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 5.011476ms)
Jan 31 21:10:12.669: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.167224ms)
Jan 31 21:10:12.673: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.416943ms)
Jan 31 21:10:12.677: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.27497ms)
Jan 31 21:10:12.680: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 2.987404ms)
Jan 31 21:10:12.683: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 2.957964ms)
Jan 31 21:10:12.686: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 2.93919ms)
Jan 31 21:10:12.689: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 2.612528ms)
Jan 31 21:10:12.701: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 12.851248ms)
Jan 31 21:10:12.720: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 18.719937ms)
Jan 31 21:10:12.725: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.55435ms)
Jan 31 21:10:12.728: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.019913ms)
Jan 31 21:10:12.731: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.201859ms)
Jan 31 21:10:12.735: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.352944ms)
Jan 31 21:10:12.739: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.053111ms)
Jan 31 21:10:12.742: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.164434ms)
Jan 31 21:10:12.745: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 2.878981ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:10:12.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8527" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":4,"skipped":67,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:10:12.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 31 21:10:12.819: INFO: Waiting up to 5m0s for pod "pod-979205bd-4036-483c-91af-19f7eaf7deff" in namespace "emptydir-6196" to be "success or failure"
Jan 31 21:10:12.872: INFO: Pod "pod-979205bd-4036-483c-91af-19f7eaf7deff": Phase="Pending", Reason="", readiness=false. Elapsed: 52.756691ms
Jan 31 21:10:14.879: INFO: Pod "pod-979205bd-4036-483c-91af-19f7eaf7deff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060382784s
Jan 31 21:10:16.889: INFO: Pod "pod-979205bd-4036-483c-91af-19f7eaf7deff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07005105s
Jan 31 21:10:18.893: INFO: Pod "pod-979205bd-4036-483c-91af-19f7eaf7deff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074398561s
Jan 31 21:10:20.906: INFO: Pod "pod-979205bd-4036-483c-91af-19f7eaf7deff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.086752665s
STEP: Saw pod success
Jan 31 21:10:20.906: INFO: Pod "pod-979205bd-4036-483c-91af-19f7eaf7deff" satisfied condition "success or failure"
Jan 31 21:10:20.911: INFO: Trying to get logs from node jerma-node pod pod-979205bd-4036-483c-91af-19f7eaf7deff container test-container: 
STEP: delete the pod
Jan 31 21:10:20.951: INFO: Waiting for pod pod-979205bd-4036-483c-91af-19f7eaf7deff to disappear
Jan 31 21:10:20.957: INFO: Pod pod-979205bd-4036-483c-91af-19f7eaf7deff no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:10:20.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6196" for this suite.

• [SLOW TEST:8.217 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":68,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:10:20.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-k2kg
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 21:10:21.176: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-k2kg" in namespace "subpath-9861" to be "success or failure"
Jan 31 21:10:21.217: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Pending", Reason="", readiness=false. Elapsed: 41.480536ms
Jan 31 21:10:23.226: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050628139s
Jan 31 21:10:25.236: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059933276s
Jan 31 21:10:27.243: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067330218s
Jan 31 21:10:29.250: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Running", Reason="", readiness=true. Elapsed: 8.074590506s
Jan 31 21:10:31.257: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Running", Reason="", readiness=true. Elapsed: 10.081764709s
Jan 31 21:10:33.267: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Running", Reason="", readiness=true. Elapsed: 12.091169121s
Jan 31 21:10:35.276: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Running", Reason="", readiness=true. Elapsed: 14.100083779s
Jan 31 21:10:37.287: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Running", Reason="", readiness=true. Elapsed: 16.111472601s
Jan 31 21:10:39.294: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Running", Reason="", readiness=true. Elapsed: 18.118597419s
Jan 31 21:10:41.316: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Running", Reason="", readiness=true. Elapsed: 20.140227609s
Jan 31 21:10:43.323: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Running", Reason="", readiness=true. Elapsed: 22.147315138s
Jan 31 21:10:45.329: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Running", Reason="", readiness=true. Elapsed: 24.153539891s
Jan 31 21:10:47.426: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Running", Reason="", readiness=true. Elapsed: 26.250745989s
Jan 31 21:10:49.436: INFO: Pod "pod-subpath-test-configmap-k2kg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.260385767s
STEP: Saw pod success
Jan 31 21:10:49.436: INFO: Pod "pod-subpath-test-configmap-k2kg" satisfied condition "success or failure"
Jan 31 21:10:49.443: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-k2kg container test-container-subpath-configmap-k2kg: 
STEP: delete the pod
Jan 31 21:10:49.515: INFO: Waiting for pod pod-subpath-test-configmap-k2kg to disappear
Jan 31 21:10:49.561: INFO: Pod pod-subpath-test-configmap-k2kg no longer exists
STEP: Deleting pod pod-subpath-test-configmap-k2kg
Jan 31 21:10:49.561: INFO: Deleting pod "pod-subpath-test-configmap-k2kg" in namespace "subpath-9861"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:10:49.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9861" for this suite.

• [SLOW TEST:28.607 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":6,"skipped":105,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:10:49.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 31 21:11:01.859: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 21:11:01.867: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 21:11:03.868: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 21:11:03.933: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 21:11:05.868: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 21:11:05.876: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 21:11:07.868: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 21:11:07.895: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 21:11:09.868: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 21:11:09.879: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 21:11:11.867: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 21:11:11.878: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 21:11:13.868: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 21:11:13.876: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:11:13.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7055" for this suite.

• [SLOW TEST:24.328 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":152,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:11:13.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 31 21:11:14.082: INFO: Waiting up to 5m0s for pod "pod-a5298f89-13ea-4646-86f6-6e3950ff3db0" in namespace "emptydir-2138" to be "success or failure"
Jan 31 21:11:14.087: INFO: Pod "pod-a5298f89-13ea-4646-86f6-6e3950ff3db0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.76535ms
Jan 31 21:11:16.106: INFO: Pod "pod-a5298f89-13ea-4646-86f6-6e3950ff3db0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023494496s
Jan 31 21:11:18.116: INFO: Pod "pod-a5298f89-13ea-4646-86f6-6e3950ff3db0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034146465s
Jan 31 21:11:20.124: INFO: Pod "pod-a5298f89-13ea-4646-86f6-6e3950ff3db0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04186149s
Jan 31 21:11:22.135: INFO: Pod "pod-a5298f89-13ea-4646-86f6-6e3950ff3db0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052388157s
STEP: Saw pod success
Jan 31 21:11:22.135: INFO: Pod "pod-a5298f89-13ea-4646-86f6-6e3950ff3db0" satisfied condition "success or failure"
Jan 31 21:11:22.140: INFO: Trying to get logs from node jerma-node pod pod-a5298f89-13ea-4646-86f6-6e3950ff3db0 container test-container: 
STEP: delete the pod
Jan 31 21:11:22.529: INFO: Waiting for pod pod-a5298f89-13ea-4646-86f6-6e3950ff3db0 to disappear
Jan 31 21:11:22.541: INFO: Pod pod-a5298f89-13ea-4646-86f6-6e3950ff3db0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:11:22.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2138" for this suite.

• [SLOW TEST:8.644 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":192,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:11:22.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:11:34.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4737" for this suite.

• [SLOW TEST:11.462 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":9,"skipped":209,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:11:34.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:11:34.289: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f43ff8d0-a63d-4b39-b317-b89bf2607f28", Controller:(*bool)(0xc000e290e2), BlockOwnerDeletion:(*bool)(0xc000e290e3)}}
Jan 31 21:11:34.382: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"719b9716-32fa-4ce4-96fe-f3d9de266e38", Controller:(*bool)(0xc000e29286), BlockOwnerDeletion:(*bool)(0xc000e29287)}}
Jan 31 21:11:34.419: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"4e4943d2-29a4-4d03-af15-c078855a5a43", Controller:(*bool)(0xc002955cf6), BlockOwnerDeletion:(*bool)(0xc002955cf7)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:11:39.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-394" for this suite.

• [SLOW TEST:5.489 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":10,"skipped":214,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:11:39.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:11:56.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5765" for this suite.

• [SLOW TEST:16.666 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":11,"skipped":219,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:11:56.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-60403528-8b85-4c32-b126-851206e0367d
STEP: Creating a pod to test consume configMaps
Jan 31 21:11:56.279: INFO: Waiting up to 5m0s for pod "pod-configmaps-edf258d8-e56f-4bc4-abbf-9cfd3a2e8ead" in namespace "configmap-1807" to be "success or failure"
Jan 31 21:11:56.303: INFO: Pod "pod-configmaps-edf258d8-e56f-4bc4-abbf-9cfd3a2e8ead": Phase="Pending", Reason="", readiness=false. Elapsed: 23.904288ms
Jan 31 21:11:58.312: INFO: Pod "pod-configmaps-edf258d8-e56f-4bc4-abbf-9cfd3a2e8ead": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032791014s
Jan 31 21:12:00.318: INFO: Pod "pod-configmaps-edf258d8-e56f-4bc4-abbf-9cfd3a2e8ead": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038796869s
Jan 31 21:12:02.323: INFO: Pod "pod-configmaps-edf258d8-e56f-4bc4-abbf-9cfd3a2e8ead": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044535247s
Jan 31 21:12:04.328: INFO: Pod "pod-configmaps-edf258d8-e56f-4bc4-abbf-9cfd3a2e8ead": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049559025s
STEP: Saw pod success
Jan 31 21:12:04.328: INFO: Pod "pod-configmaps-edf258d8-e56f-4bc4-abbf-9cfd3a2e8ead" satisfied condition "success or failure"
Jan 31 21:12:04.332: INFO: Trying to get logs from node jerma-node pod pod-configmaps-edf258d8-e56f-4bc4-abbf-9cfd3a2e8ead container configmap-volume-test: 
STEP: delete the pod
Jan 31 21:12:04.375: INFO: Waiting for pod pod-configmaps-edf258d8-e56f-4bc4-abbf-9cfd3a2e8ead to disappear
Jan 31 21:12:04.395: INFO: Pod pod-configmaps-edf258d8-e56f-4bc4-abbf-9cfd3a2e8ead no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:12:04.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1807" for this suite.

• [SLOW TEST:8.234 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":232,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:12:04.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 31 21:12:20.795: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 21:12:20.799: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 21:12:22.799: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 21:12:22.808: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 21:12:24.799: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 21:12:24.807: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 21:12:26.799: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 21:12:26.807: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 21:12:28.799: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 21:12:28.805: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 21:12:30.799: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 21:12:30.807: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 21:12:32.800: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 21:12:32.807: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:12:32.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6165" for this suite.

• [SLOW TEST:28.414 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":267,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:12:32.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0131 21:12:35.956206       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 21:12:35.956: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:12:35.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4031" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":14,"skipped":280,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:12:36.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-d8c46080-cac8-434a-b0bd-569012e35ea5 in namespace container-probe-1103
Jan 31 21:12:47.603: INFO: Started pod test-webserver-d8c46080-cac8-434a-b0bd-569012e35ea5 in namespace container-probe-1103
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 21:12:47.610: INFO: Initial restart count of pod test-webserver-d8c46080-cac8-434a-b0bd-569012e35ea5 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:16:49.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1103" for this suite.

• [SLOW TEST:253.212 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":281,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:16:49.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-dd8981ef-df11-4af0-b2e8-fcd9772ff8aa
STEP: Creating a pod to test consume secrets
Jan 31 21:16:49.529: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-048acfe7-e396-4ef1-87e2-8599fc314ad1" in namespace "projected-4438" to be "success or failure"
Jan 31 21:16:49.540: INFO: Pod "pod-projected-secrets-048acfe7-e396-4ef1-87e2-8599fc314ad1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.849849ms
Jan 31 21:16:51.551: INFO: Pod "pod-projected-secrets-048acfe7-e396-4ef1-87e2-8599fc314ad1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022027348s
Jan 31 21:16:53.562: INFO: Pod "pod-projected-secrets-048acfe7-e396-4ef1-87e2-8599fc314ad1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03353989s
Jan 31 21:16:55.571: INFO: Pod "pod-projected-secrets-048acfe7-e396-4ef1-87e2-8599fc314ad1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04232106s
Jan 31 21:16:57.580: INFO: Pod "pod-projected-secrets-048acfe7-e396-4ef1-87e2-8599fc314ad1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051403414s
STEP: Saw pod success
Jan 31 21:16:57.580: INFO: Pod "pod-projected-secrets-048acfe7-e396-4ef1-87e2-8599fc314ad1" satisfied condition "success or failure"
Jan 31 21:16:57.585: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-048acfe7-e396-4ef1-87e2-8599fc314ad1 container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 21:16:57.668: INFO: Waiting for pod pod-projected-secrets-048acfe7-e396-4ef1-87e2-8599fc314ad1 to disappear
Jan 31 21:16:57.680: INFO: Pod pod-projected-secrets-048acfe7-e396-4ef1-87e2-8599fc314ad1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:16:57.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4438" for this suite.

• [SLOW TEST:8.256 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":292,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:16:57.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:16:57.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-2311
I0131 21:16:57.883788       8 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2311, replica count: 1
I0131 21:16:58.934764       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:16:59.935116       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:17:00.935462       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:17:01.935863       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:17:02.936438       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:17:03.936954       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
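
With the backing replication controller running, each Created/Got endpoints pair below times one round trip: create a Service selecting the RC's pod, then wait for the matching Endpoints object to appear. A sketch of one such Service; the selector label is an assumption about what the RC runner sets on its pod:

apiVersion: v1
kind: Service
metadata:
  name: latency-svc-example   # the loop generates random suffixes like latency-svc-l8qj8
spec:
  selector:
    name: svc-latency-rc      # assumed pod label
  ports:
  - port: 80
    protocol: TCP
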
Jan 31 21:17:04.052: INFO: Created: latency-svc-l8qj8
Jan 31 21:17:04.149: INFO: Got endpoints: latency-svc-l8qj8 [112.339277ms]
Jan 31 21:17:04.203: INFO: Created: latency-svc-9j4kh
Jan 31 21:17:04.221: INFO: Got endpoints: latency-svc-9j4kh [71.844818ms]
Jan 31 21:17:04.342: INFO: Created: latency-svc-z9s62
Jan 31 21:17:04.350: INFO: Got endpoints: latency-svc-z9s62 [200.704905ms]
Jan 31 21:17:04.382: INFO: Created: latency-svc-jrhnw
Jan 31 21:17:04.392: INFO: Got endpoints: latency-svc-jrhnw [241.812336ms]
Jan 31 21:17:04.499: INFO: Created: latency-svc-glz82
Jan 31 21:17:04.504: INFO: Got endpoints: latency-svc-glz82 [353.866199ms]
Jan 31 21:17:04.533: INFO: Created: latency-svc-t998n
Jan 31 21:17:04.548: INFO: Got endpoints: latency-svc-t998n [397.634142ms]
Jan 31 21:17:04.571: INFO: Created: latency-svc-zp6r5
Jan 31 21:17:04.576: INFO: Got endpoints: latency-svc-zp6r5 [426.120085ms]
Jan 31 21:17:04.652: INFO: Created: latency-svc-w6wdf
Jan 31 21:17:04.663: INFO: Got endpoints: latency-svc-w6wdf [512.156058ms]
Jan 31 21:17:04.704: INFO: Created: latency-svc-wmn95
Jan 31 21:17:04.708: INFO: Got endpoints: latency-svc-wmn95 [558.059341ms]
Jan 31 21:17:04.830: INFO: Created: latency-svc-5bx57
Jan 31 21:17:04.893: INFO: Got endpoints: latency-svc-5bx57 [742.113942ms]
Jan 31 21:17:04.896: INFO: Created: latency-svc-4nvn7
Jan 31 21:17:04.904: INFO: Got endpoints: latency-svc-4nvn7 [753.587709ms]
Jan 31 21:17:05.002: INFO: Created: latency-svc-mw77v
Jan 31 21:17:05.060: INFO: Got endpoints: latency-svc-mw77v [909.433424ms]
Jan 31 21:17:05.064: INFO: Created: latency-svc-fss6c
Jan 31 21:17:05.085: INFO: Got endpoints: latency-svc-fss6c [934.202171ms]
Jan 31 21:17:05.156: INFO: Created: latency-svc-mdvsl
Jan 31 21:17:05.809: INFO: Got endpoints: latency-svc-mdvsl [1.658755777s]
Jan 31 21:17:05.843: INFO: Created: latency-svc-qwp4t
Jan 31 21:17:05.956: INFO: Got endpoints: latency-svc-qwp4t [1.806116967s]
Jan 31 21:17:05.986: INFO: Created: latency-svc-svb4v
Jan 31 21:17:06.018: INFO: Got endpoints: latency-svc-svb4v [1.867310166s]
Jan 31 21:17:06.023: INFO: Created: latency-svc-tstg6
Jan 31 21:17:06.103: INFO: Got endpoints: latency-svc-tstg6 [1.88148369s]
Jan 31 21:17:06.121: INFO: Created: latency-svc-2tmc8
Jan 31 21:17:06.132: INFO: Got endpoints: latency-svc-2tmc8 [1.781119412s]
Jan 31 21:17:06.198: INFO: Created: latency-svc-hjcxn
Jan 31 21:17:06.252: INFO: Got endpoints: latency-svc-hjcxn [1.860075275s]
Jan 31 21:17:06.276: INFO: Created: latency-svc-dw5mw
Jan 31 21:17:06.277: INFO: Got endpoints: latency-svc-dw5mw [1.773473752s]
Jan 31 21:17:06.311: INFO: Created: latency-svc-6hhgc
Jan 31 21:17:06.328: INFO: Got endpoints: latency-svc-6hhgc [1.780061629s]
Jan 31 21:17:06.352: INFO: Created: latency-svc-vq52v
Jan 31 21:17:06.418: INFO: Got endpoints: latency-svc-vq52v [1.841230652s]
Jan 31 21:17:06.448: INFO: Created: latency-svc-fks6c
Jan 31 21:17:06.463: INFO: Got endpoints: latency-svc-fks6c [1.800271641s]
Jan 31 21:17:06.469: INFO: Created: latency-svc-vbfd5
Jan 31 21:17:06.486: INFO: Got endpoints: latency-svc-vbfd5 [1.777837022s]
Jan 31 21:17:06.492: INFO: Created: latency-svc-wrxqf
Jan 31 21:17:06.595: INFO: Got endpoints: latency-svc-wrxqf [1.702537205s]
Jan 31 21:17:06.617: INFO: Created: latency-svc-v2zvz
Jan 31 21:17:06.628: INFO: Got endpoints: latency-svc-v2zvz [1.724437127s]
Jan 31 21:17:06.664: INFO: Created: latency-svc-lqhql
Jan 31 21:17:06.667: INFO: Got endpoints: latency-svc-lqhql [1.607390772s]
Jan 31 21:17:06.695: INFO: Created: latency-svc-mxhgw
Jan 31 21:17:06.786: INFO: Got endpoints: latency-svc-mxhgw [1.700803017s]
Jan 31 21:17:06.837: INFO: Created: latency-svc-sqth9
Jan 31 21:17:06.845: INFO: Got endpoints: latency-svc-sqth9 [1.035712257s]
Jan 31 21:17:06.886: INFO: Created: latency-svc-c7rpr
Jan 31 21:17:06.942: INFO: Got endpoints: latency-svc-c7rpr [985.248582ms]
Jan 31 21:17:06.970: INFO: Created: latency-svc-6q5pq
Jan 31 21:17:06.973: INFO: Got endpoints: latency-svc-6q5pq [955.140273ms]
Jan 31 21:17:07.146: INFO: Created: latency-svc-6qwf7
Jan 31 21:17:07.150: INFO: Got endpoints: latency-svc-6qwf7 [1.047252907s]
Jan 31 21:17:07.184: INFO: Created: latency-svc-kz969
Jan 31 21:17:07.190: INFO: Got endpoints: latency-svc-kz969 [1.058312692s]
Jan 31 21:17:07.221: INFO: Created: latency-svc-dntnr
Jan 31 21:17:07.228: INFO: Got endpoints: latency-svc-dntnr [976.354975ms]
Jan 31 21:17:07.297: INFO: Created: latency-svc-fmfzr
Jan 31 21:17:07.306: INFO: Got endpoints: latency-svc-fmfzr [1.028588778s]
Jan 31 21:17:07.326: INFO: Created: latency-svc-h4pqp
Jan 31 21:17:07.355: INFO: Got endpoints: latency-svc-h4pqp [1.026621152s]
Jan 31 21:17:07.359: INFO: Created: latency-svc-c68x6
Jan 31 21:17:07.368: INFO: Got endpoints: latency-svc-c68x6 [949.744875ms]
Jan 31 21:17:07.372: INFO: Created: latency-svc-stz2c
Jan 31 21:17:07.387: INFO: Got endpoints: latency-svc-stz2c [924.078462ms]
Jan 31 21:17:07.492: INFO: Created: latency-svc-gw84x
Jan 31 21:17:07.518: INFO: Got endpoints: latency-svc-gw84x [1.031261781s]
Jan 31 21:17:07.526: INFO: Created: latency-svc-9wbdp
Jan 31 21:17:07.529: INFO: Got endpoints: latency-svc-9wbdp [933.238503ms]
Jan 31 21:17:07.592: INFO: Created: latency-svc-9f2sr
Jan 31 21:17:07.711: INFO: Got endpoints: latency-svc-9f2sr [1.082731146s]
Jan 31 21:17:07.719: INFO: Created: latency-svc-92s8c
Jan 31 21:17:07.721: INFO: Got endpoints: latency-svc-92s8c [1.053267348s]
Jan 31 21:17:07.753: INFO: Created: latency-svc-lqqm7
Jan 31 21:17:07.765: INFO: Got endpoints: latency-svc-lqqm7 [978.994123ms]
Jan 31 21:17:07.787: INFO: Created: latency-svc-42cb8
Jan 31 21:17:07.809: INFO: Got endpoints: latency-svc-42cb8 [963.829246ms]
Jan 31 21:17:07.810: INFO: Created: latency-svc-hl9h2
Jan 31 21:17:07.895: INFO: Got endpoints: latency-svc-hl9h2 [953.357017ms]
Jan 31 21:17:07.968: INFO: Created: latency-svc-p4jct
Jan 31 21:17:07.983: INFO: Got endpoints: latency-svc-p4jct [1.009625921s]
Jan 31 21:17:08.060: INFO: Created: latency-svc-zjlw9
Jan 31 21:17:08.096: INFO: Got endpoints: latency-svc-zjlw9 [946.091441ms]
Jan 31 21:17:08.110: INFO: Created: latency-svc-g5pvj
Jan 31 21:17:08.117: INFO: Created: latency-svc-xldnc
Jan 31 21:17:08.119: INFO: Got endpoints: latency-svc-g5pvj [929.083927ms]
Jan 31 21:17:08.124: INFO: Got endpoints: latency-svc-xldnc [895.514901ms]
Jan 31 21:17:08.209: INFO: Created: latency-svc-j4frk
Jan 31 21:17:08.213: INFO: Got endpoints: latency-svc-j4frk [906.481207ms]
Jan 31 21:17:08.244: INFO: Created: latency-svc-hv8fw
Jan 31 21:17:08.261: INFO: Got endpoints: latency-svc-hv8fw [906.045457ms]
Jan 31 21:17:08.286: INFO: Created: latency-svc-rfrrk
Jan 31 21:17:08.287: INFO: Got endpoints: latency-svc-rfrrk [918.708466ms]
Jan 31 21:17:08.390: INFO: Created: latency-svc-6bqtt
Jan 31 21:17:08.395: INFO: Got endpoints: latency-svc-6bqtt [1.007459809s]
Jan 31 21:17:08.426: INFO: Created: latency-svc-xzgvs
Jan 31 21:17:08.448: INFO: Got endpoints: latency-svc-xzgvs [929.745587ms]
Jan 31 21:17:08.480: INFO: Created: latency-svc-xhzxq
Jan 31 21:17:08.485: INFO: Got endpoints: latency-svc-xhzxq [955.973541ms]
Jan 31 21:17:08.564: INFO: Created: latency-svc-r9s79
Jan 31 21:17:08.595: INFO: Got endpoints: latency-svc-r9s79 [883.396286ms]
Jan 31 21:17:08.598: INFO: Created: latency-svc-kmbtk
Jan 31 21:17:08.614: INFO: Got endpoints: latency-svc-kmbtk [892.759514ms]
Jan 31 21:17:08.628: INFO: Created: latency-svc-ng7xb
Jan 31 21:17:08.633: INFO: Got endpoints: latency-svc-ng7xb [867.757098ms]
Jan 31 21:17:08.714: INFO: Created: latency-svc-2txvn
Jan 31 21:17:08.720: INFO: Got endpoints: latency-svc-2txvn [910.190173ms]
Jan 31 21:17:08.920: INFO: Created: latency-svc-sc85f
Jan 31 21:17:08.926: INFO: Got endpoints: latency-svc-sc85f [1.030387087s]
Jan 31 21:17:08.960: INFO: Created: latency-svc-hx5z4
Jan 31 21:17:08.964: INFO: Got endpoints: latency-svc-hx5z4 [981.713787ms]
Jan 31 21:17:08.986: INFO: Created: latency-svc-7glrf
Jan 31 21:17:08.995: INFO: Got endpoints: latency-svc-7glrf [898.508804ms]
Jan 31 21:17:09.018: INFO: Created: latency-svc-dqtqt
Jan 31 21:17:09.112: INFO: Got endpoints: latency-svc-dqtqt [993.053322ms]
Jan 31 21:17:09.115: INFO: Created: latency-svc-zvxlb
Jan 31 21:17:09.127: INFO: Got endpoints: latency-svc-zvxlb [1.002644692s]
Jan 31 21:17:09.160: INFO: Created: latency-svc-58thv
Jan 31 21:17:09.170: INFO: Got endpoints: latency-svc-58thv [957.518656ms]
Jan 31 21:17:09.186: INFO: Created: latency-svc-b9w7n
Jan 31 21:17:09.192: INFO: Got endpoints: latency-svc-b9w7n [930.566716ms]
Jan 31 21:17:09.313: INFO: Created: latency-svc-l4qf9
Jan 31 21:17:09.315: INFO: Got endpoints: latency-svc-l4qf9 [1.028205397s]
Jan 31 21:17:09.343: INFO: Created: latency-svc-qvzpd
Jan 31 21:17:09.346: INFO: Got endpoints: latency-svc-qvzpd [950.387818ms]
Jan 31 21:17:09.369: INFO: Created: latency-svc-9p7wj
Jan 31 21:17:09.371: INFO: Got endpoints: latency-svc-9p7wj [923.430637ms]
Jan 31 21:17:09.529: INFO: Created: latency-svc-fsgtx
Jan 31 21:17:09.530: INFO: Got endpoints: latency-svc-fsgtx [1.04422487s]
Jan 31 21:17:09.564: INFO: Created: latency-svc-tjvj8
Jan 31 21:17:09.568: INFO: Got endpoints: latency-svc-tjvj8 [972.843028ms]
Jan 31 21:17:09.604: INFO: Created: latency-svc-h52tn
Jan 31 21:17:09.610: INFO: Got endpoints: latency-svc-h52tn [996.116093ms]
Jan 31 21:17:09.712: INFO: Created: latency-svc-8gx4d
Jan 31 21:17:09.715: INFO: Got endpoints: latency-svc-8gx4d [1.081741256s]
Jan 31 21:17:09.771: INFO: Created: latency-svc-t8mdh
Jan 31 21:17:09.794: INFO: Got endpoints: latency-svc-t8mdh [1.073436882s]
Jan 31 21:17:09.903: INFO: Created: latency-svc-q4mfr
Jan 31 21:17:09.940: INFO: Created: latency-svc-z2r7v
Jan 31 21:17:09.941: INFO: Got endpoints: latency-svc-q4mfr [1.014940962s]
Jan 31 21:17:09.965: INFO: Got endpoints: latency-svc-z2r7v [1.000975563s]
Jan 31 21:17:09.978: INFO: Created: latency-svc-527c8
Jan 31 21:17:09.983: INFO: Got endpoints: latency-svc-527c8 [987.771639ms]
Jan 31 21:17:10.071: INFO: Created: latency-svc-m7hvm
Jan 31 21:17:10.072: INFO: Got endpoints: latency-svc-m7hvm [959.054926ms]
Jan 31 21:17:10.097: INFO: Created: latency-svc-5q5vw
Jan 31 21:17:10.103: INFO: Got endpoints: latency-svc-5q5vw [976.539208ms]
Jan 31 21:17:10.163: INFO: Created: latency-svc-6gkgh
Jan 31 21:17:10.228: INFO: Got endpoints: latency-svc-6gkgh [1.05797271s]
Jan 31 21:17:10.235: INFO: Created: latency-svc-8wg8x
Jan 31 21:17:10.241: INFO: Got endpoints: latency-svc-8wg8x [1.049450601s]
Jan 31 21:17:10.268: INFO: Created: latency-svc-6n95g
Jan 31 21:17:10.273: INFO: Got endpoints: latency-svc-6n95g [957.880128ms]
Jan 31 21:17:10.310: INFO: Created: latency-svc-rgrsl
Jan 31 21:17:10.318: INFO: Got endpoints: latency-svc-rgrsl [972.778986ms]
Jan 31 21:17:10.406: INFO: Created: latency-svc-mqv22
Jan 31 21:17:10.445: INFO: Got endpoints: latency-svc-mqv22 [1.073279042s]
Jan 31 21:17:10.451: INFO: Created: latency-svc-x9hlq
Jan 31 21:17:10.452: INFO: Got endpoints: latency-svc-x9hlq [922.665926ms]
Jan 31 21:17:10.481: INFO: Created: latency-svc-mpcxx
Jan 31 21:17:10.491: INFO: Got endpoints: latency-svc-mpcxx [922.396809ms]
Jan 31 21:17:10.627: INFO: Created: latency-svc-zhm49
Jan 31 21:17:10.636: INFO: Got endpoints: latency-svc-zhm49 [1.025904144s]
Jan 31 21:17:10.652: INFO: Created: latency-svc-zccqr
Jan 31 21:17:10.660: INFO: Got endpoints: latency-svc-zccqr [945.44278ms]
Jan 31 21:17:10.688: INFO: Created: latency-svc-g6l79
Jan 31 21:17:10.701: INFO: Got endpoints: latency-svc-g6l79 [907.186791ms]
Jan 31 21:17:10.721: INFO: Created: latency-svc-vzhfv
Jan 31 21:17:10.857: INFO: Got endpoints: latency-svc-vzhfv [916.16902ms]
Jan 31 21:17:10.873: INFO: Created: latency-svc-42g72
Jan 31 21:17:10.874: INFO: Got endpoints: latency-svc-42g72 [908.279418ms]
Jan 31 21:17:10.915: INFO: Created: latency-svc-k4thf
Jan 31 21:17:10.928: INFO: Got endpoints: latency-svc-k4thf [944.533523ms]
Jan 31 21:17:10.960: INFO: Created: latency-svc-skcm9
Jan 31 21:17:11.098: INFO: Got endpoints: latency-svc-skcm9 [1.026144598s]
Jan 31 21:17:11.105: INFO: Created: latency-svc-bjqxf
Jan 31 21:17:11.114: INFO: Got endpoints: latency-svc-bjqxf [1.010119386s]
Jan 31 21:17:11.152: INFO: Created: latency-svc-pdvkd
Jan 31 21:17:11.161: INFO: Got endpoints: latency-svc-pdvkd [933.101662ms]
Jan 31 21:17:11.187: INFO: Created: latency-svc-jt2ht
Jan 31 21:17:11.241: INFO: Got endpoints: latency-svc-jt2ht [999.909235ms]
Jan 31 21:17:11.243: INFO: Created: latency-svc-xd9mt
Jan 31 21:17:11.246: INFO: Got endpoints: latency-svc-xd9mt [973.038538ms]
Jan 31 21:17:11.285: INFO: Created: latency-svc-7fpz4
Jan 31 21:17:11.291: INFO: Got endpoints: latency-svc-7fpz4 [972.066186ms]
Jan 31 21:17:11.314: INFO: Created: latency-svc-m9ckd
Jan 31 21:17:11.322: INFO: Got endpoints: latency-svc-m9ckd [877.051756ms]
Jan 31 21:17:11.392: INFO: Created: latency-svc-9vjd6
Jan 31 21:17:11.464: INFO: Got endpoints: latency-svc-9vjd6 [1.011578427s]
Jan 31 21:17:11.468: INFO: Created: latency-svc-dhw5t
Jan 31 21:17:11.471: INFO: Got endpoints: latency-svc-dhw5t [980.558417ms]
Jan 31 21:17:11.548: INFO: Created: latency-svc-ckhrh
Jan 31 21:17:11.552: INFO: Got endpoints: latency-svc-ckhrh [915.157077ms]
Jan 31 21:17:11.582: INFO: Created: latency-svc-zqjdh
Jan 31 21:17:11.583: INFO: Got endpoints: latency-svc-zqjdh [922.983237ms]
Jan 31 21:17:11.604: INFO: Created: latency-svc-xngm7
Jan 31 21:17:11.610: INFO: Got endpoints: latency-svc-xngm7 [908.956762ms]
Jan 31 21:17:11.677: INFO: Created: latency-svc-nzrph
Jan 31 21:17:11.685: INFO: Got endpoints: latency-svc-nzrph [827.331259ms]
Jan 31 21:17:11.728: INFO: Created: latency-svc-8kvz2
Jan 31 21:17:11.737: INFO: Got endpoints: latency-svc-8kvz2 [862.978269ms]
Jan 31 21:17:11.836: INFO: Created: latency-svc-mnkkw
Jan 31 21:17:11.901: INFO: Got endpoints: latency-svc-mnkkw [973.263496ms]
Jan 31 21:17:11.902: INFO: Created: latency-svc-57lwf
Jan 31 21:17:11.924: INFO: Got endpoints: latency-svc-57lwf [825.429472ms]
Jan 31 21:17:11.975: INFO: Created: latency-svc-hkmg2
Jan 31 21:17:12.003: INFO: Created: latency-svc-vv2xc
Jan 31 21:17:12.005: INFO: Got endpoints: latency-svc-hkmg2 [891.342934ms]
Jan 31 21:17:12.029: INFO: Got endpoints: latency-svc-vv2xc [867.539488ms]
Jan 31 21:17:12.032: INFO: Created: latency-svc-qjwrx
Jan 31 21:17:12.044: INFO: Got endpoints: latency-svc-qjwrx [802.736523ms]
Jan 31 21:17:12.058: INFO: Created: latency-svc-r8nkl
Jan 31 21:17:12.157: INFO: Got endpoints: latency-svc-r8nkl [910.749913ms]
Jan 31 21:17:12.162: INFO: Created: latency-svc-wh2gf
Jan 31 21:17:12.164: INFO: Got endpoints: latency-svc-wh2gf [873.520181ms]
Jan 31 21:17:12.185: INFO: Created: latency-svc-24zgg
Jan 31 21:17:12.249: INFO: Got endpoints: latency-svc-24zgg [926.573816ms]
Jan 31 21:17:12.308: INFO: Created: latency-svc-ck6zj
Jan 31 21:17:12.312: INFO: Got endpoints: latency-svc-ck6zj [847.544345ms]
Jan 31 21:17:12.339: INFO: Created: latency-svc-cms8q
Jan 31 21:17:12.347: INFO: Got endpoints: latency-svc-cms8q [875.492194ms]
Jan 31 21:17:12.367: INFO: Created: latency-svc-plhdm
Jan 31 21:17:12.369: INFO: Got endpoints: latency-svc-plhdm [817.601802ms]
Jan 31 21:17:12.396: INFO: Created: latency-svc-gjl9h
Jan 31 21:17:12.400: INFO: Got endpoints: latency-svc-gjl9h [816.941112ms]
Jan 31 21:17:12.507: INFO: Created: latency-svc-6pm9l
Jan 31 21:17:12.514: INFO: Got endpoints: latency-svc-6pm9l [903.414623ms]
Jan 31 21:17:12.571: INFO: Created: latency-svc-lf2hv
Jan 31 21:17:12.597: INFO: Got endpoints: latency-svc-lf2hv [912.581205ms]
Jan 31 21:17:12.717: INFO: Created: latency-svc-sqqhb
Jan 31 21:17:12.732: INFO: Got endpoints: latency-svc-sqqhb [994.868647ms]
Jan 31 21:17:12.754: INFO: Created: latency-svc-nfrnd
Jan 31 21:17:12.758: INFO: Got endpoints: latency-svc-nfrnd [856.608979ms]
Jan 31 21:17:12.806: INFO: Created: latency-svc-4m55n
Jan 31 21:17:12.912: INFO: Got endpoints: latency-svc-4m55n [988.65444ms]
Jan 31 21:17:12.969: INFO: Created: latency-svc-qs9rm
Jan 31 21:17:12.975: INFO: Got endpoints: latency-svc-qs9rm [970.164642ms]
Jan 31 21:17:12.997: INFO: Created: latency-svc-76vzh
Jan 31 21:17:13.004: INFO: Got endpoints: latency-svc-76vzh [974.804389ms]
Jan 31 21:17:13.081: INFO: Created: latency-svc-zz66n
Jan 31 21:17:13.094: INFO: Got endpoints: latency-svc-zz66n [1.049324744s]
Jan 31 21:17:13.131: INFO: Created: latency-svc-6sfx4
Jan 31 21:17:13.140: INFO: Got endpoints: latency-svc-6sfx4 [982.574044ms]
Jan 31 21:17:13.260: INFO: Created: latency-svc-ks64b
Jan 31 21:17:13.276: INFO: Got endpoints: latency-svc-ks64b [1.111826918s]
Jan 31 21:17:13.311: INFO: Created: latency-svc-l26rq
Jan 31 21:17:13.315: INFO: Got endpoints: latency-svc-l26rq [1.066295327s]
Jan 31 21:17:13.358: INFO: Created: latency-svc-spxh8
Jan 31 21:17:13.424: INFO: Got endpoints: latency-svc-spxh8 [1.112546617s]
Jan 31 21:17:13.456: INFO: Created: latency-svc-xsnjt
Jan 31 21:17:13.467: INFO: Got endpoints: latency-svc-xsnjt [1.120180113s]
Jan 31 21:17:13.484: INFO: Created: latency-svc-7dnd2
Jan 31 21:17:13.509: INFO: Got endpoints: latency-svc-7dnd2 [1.139581267s]
Jan 31 21:17:13.512: INFO: Created: latency-svc-2txpx
Jan 31 21:17:13.515: INFO: Got endpoints: latency-svc-2txpx [1.114978945s]
Jan 31 21:17:13.607: INFO: Created: latency-svc-trn8b
Jan 31 21:17:13.610: INFO: Got endpoints: latency-svc-trn8b [1.095998757s]
Jan 31 21:17:13.756: INFO: Created: latency-svc-ffscm
Jan 31 21:17:13.759: INFO: Got endpoints: latency-svc-ffscm [1.16182487s]
Jan 31 21:17:13.801: INFO: Created: latency-svc-82tqh
Jan 31 21:17:13.812: INFO: Got endpoints: latency-svc-82tqh [1.080253013s]
Jan 31 21:17:13.932: INFO: Created: latency-svc-fhmv9
Jan 31 21:17:13.974: INFO: Got endpoints: latency-svc-fhmv9 [1.216277279s]
Jan 31 21:17:13.977: INFO: Created: latency-svc-shmcp
Jan 31 21:17:13.987: INFO: Got endpoints: latency-svc-shmcp [1.074818204s]
Jan 31 21:17:14.023: INFO: Created: latency-svc-qxhz7
Jan 31 21:17:14.028: INFO: Got endpoints: latency-svc-qxhz7 [1.052754522s]
Jan 31 21:17:14.150: INFO: Created: latency-svc-zllmj
Jan 31 21:17:14.154: INFO: Got endpoints: latency-svc-zllmj [1.149175122s]
Jan 31 21:17:14.210: INFO: Created: latency-svc-2gcdx
Jan 31 21:17:14.226: INFO: Got endpoints: latency-svc-2gcdx [1.132417953s]
Jan 31 21:17:14.360: INFO: Created: latency-svc-tzpgs
Jan 31 21:17:14.382: INFO: Got endpoints: latency-svc-tzpgs [1.242494524s]
Jan 31 21:17:14.386: INFO: Created: latency-svc-nswvl
Jan 31 21:17:14.418: INFO: Got endpoints: latency-svc-nswvl [1.141499832s]
Jan 31 21:17:14.461: INFO: Created: latency-svc-djvvs
Jan 31 21:17:14.521: INFO: Got endpoints: latency-svc-djvvs [1.205877716s]
Jan 31 21:17:14.550: INFO: Created: latency-svc-t6f2l
Jan 31 21:17:14.566: INFO: Got endpoints: latency-svc-t6f2l [1.141760625s]
Jan 31 21:17:14.589: INFO: Created: latency-svc-7qfq2
Jan 31 21:17:14.601: INFO: Got endpoints: latency-svc-7qfq2 [1.133327149s]
Jan 31 21:17:14.668: INFO: Created: latency-svc-sdb8r
Jan 31 21:17:14.694: INFO: Got endpoints: latency-svc-sdb8r [1.185150514s]
Jan 31 21:17:14.698: INFO: Created: latency-svc-ztwdd
Jan 31 21:17:14.809: INFO: Got endpoints: latency-svc-ztwdd [1.294046829s]
Jan 31 21:17:14.822: INFO: Created: latency-svc-sknkl
Jan 31 21:17:14.831: INFO: Got endpoints: latency-svc-sknkl [1.220401565s]
Jan 31 21:17:14.870: INFO: Created: latency-svc-8j25v
Jan 31 21:17:14.871: INFO: Got endpoints: latency-svc-8j25v [1.111692859s]
Jan 31 21:17:14.898: INFO: Created: latency-svc-pqmnf
Jan 31 21:17:14.989: INFO: Got endpoints: latency-svc-pqmnf [1.176680666s]
Jan 31 21:17:14.990: INFO: Created: latency-svc-d6m7m
Jan 31 21:17:15.019: INFO: Got endpoints: latency-svc-d6m7m [1.045036783s]
Jan 31 21:17:15.024: INFO: Created: latency-svc-4ljzx
Jan 31 21:17:15.024: INFO: Got endpoints: latency-svc-4ljzx [1.036618893s]
Jan 31 21:17:15.060: INFO: Created: latency-svc-gn99f
Jan 31 21:17:15.065: INFO: Got endpoints: latency-svc-gn99f [1.036704714s]
Jan 31 21:17:15.177: INFO: Created: latency-svc-pbvq4
Jan 31 21:17:15.184: INFO: Got endpoints: latency-svc-pbvq4 [1.030447405s]
Jan 31 21:17:15.243: INFO: Created: latency-svc-zqst2
Jan 31 21:17:15.256: INFO: Got endpoints: latency-svc-zqst2 [1.029367049s]
Jan 31 21:17:15.272: INFO: Created: latency-svc-j88d6
Jan 31 21:17:15.331: INFO: Got endpoints: latency-svc-j88d6 [948.783274ms]
Jan 31 21:17:15.338: INFO: Created: latency-svc-g2xxq
Jan 31 21:17:15.339: INFO: Got endpoints: latency-svc-g2xxq [921.031257ms]
Jan 31 21:17:15.383: INFO: Created: latency-svc-nvbq6
Jan 31 21:17:15.389: INFO: Got endpoints: latency-svc-nvbq6 [867.011715ms]
Jan 31 21:17:15.428: INFO: Created: latency-svc-gm5sl
Jan 31 21:17:15.469: INFO: Got endpoints: latency-svc-gm5sl [902.041643ms]
Jan 31 21:17:15.484: INFO: Created: latency-svc-254j2
Jan 31 21:17:15.497: INFO: Got endpoints: latency-svc-254j2 [896.204932ms]
Jan 31 21:17:15.558: INFO: Created: latency-svc-hrxhz
Jan 31 21:17:15.567: INFO: Got endpoints: latency-svc-hrxhz [872.971282ms]
Jan 31 21:17:15.626: INFO: Created: latency-svc-b2lpm
Jan 31 21:17:15.634: INFO: Got endpoints: latency-svc-b2lpm [823.838576ms]
Jan 31 21:17:15.658: INFO: Created: latency-svc-7qclt
Jan 31 21:17:15.664: INFO: Got endpoints: latency-svc-7qclt [833.243291ms]
Jan 31 21:17:15.687: INFO: Created: latency-svc-f9t5q
Jan 31 21:17:15.693: INFO: Got endpoints: latency-svc-f9t5q [822.079637ms]
Jan 31 21:17:15.755: INFO: Created: latency-svc-j9dqw
Jan 31 21:17:15.759: INFO: Got endpoints: latency-svc-j9dqw [770.090992ms]
Jan 31 21:17:15.818: INFO: Created: latency-svc-99bnw
Jan 31 21:17:15.942: INFO: Got endpoints: latency-svc-99bnw [922.421645ms]
Jan 31 21:17:15.961: INFO: Created: latency-svc-4fw44
Jan 31 21:17:15.983: INFO: Got endpoints: latency-svc-4fw44 [959.050804ms]
Jan 31 21:17:16.078: INFO: Created: latency-svc-sxvqj
Jan 31 21:17:16.078: INFO: Got endpoints: latency-svc-sxvqj [1.01306149s]
Jan 31 21:17:16.105: INFO: Created: latency-svc-pdlzl
Jan 31 21:17:16.113: INFO: Got endpoints: latency-svc-pdlzl [928.086202ms]
Jan 31 21:17:16.135: INFO: Created: latency-svc-bcz9v
Jan 31 21:17:16.155: INFO: Got endpoints: latency-svc-bcz9v [899.497964ms]
Jan 31 21:17:16.204: INFO: Created: latency-svc-vr4fc
Jan 31 21:17:16.213: INFO: Got endpoints: latency-svc-vr4fc [881.896038ms]
Jan 31 21:17:16.228: INFO: Created: latency-svc-mmhtf
Jan 31 21:17:16.245: INFO: Got endpoints: latency-svc-mmhtf [906.381167ms]
Jan 31 21:17:16.247: INFO: Created: latency-svc-8xh6f
Jan 31 21:17:16.270: INFO: Got endpoints: latency-svc-8xh6f [881.380262ms]
Jan 31 21:17:16.277: INFO: Created: latency-svc-4v8kb
Jan 31 21:17:16.279: INFO: Got endpoints: latency-svc-4v8kb [809.958565ms]
Jan 31 21:17:16.376: INFO: Created: latency-svc-lxnk8
Jan 31 21:17:16.380: INFO: Got endpoints: latency-svc-lxnk8 [882.510745ms]
Jan 31 21:17:16.409: INFO: Created: latency-svc-5vnc2
Jan 31 21:17:16.420: INFO: Got endpoints: latency-svc-5vnc2 [852.610975ms]
Jan 31 21:17:16.445: INFO: Created: latency-svc-59b6s
Jan 31 21:17:16.454: INFO: Got endpoints: latency-svc-59b6s [820.579354ms]
Jan 31 21:17:16.520: INFO: Created: latency-svc-29qc4
Jan 31 21:17:16.549: INFO: Got endpoints: latency-svc-29qc4 [884.827247ms]
Jan 31 21:17:16.553: INFO: Created: latency-svc-2x77b
Jan 31 21:17:16.588: INFO: Got endpoints: latency-svc-2x77b [894.191226ms]
Jan 31 21:17:16.613: INFO: Created: latency-svc-nq565
Jan 31 21:17:16.662: INFO: Got endpoints: latency-svc-nq565 [902.622092ms]
Jan 31 21:17:16.673: INFO: Created: latency-svc-mdpbr
Jan 31 21:17:16.687: INFO: Got endpoints: latency-svc-mdpbr [744.933892ms]
Jan 31 21:17:16.747: INFO: Created: latency-svc-tnkfw
Jan 31 21:17:16.815: INFO: Got endpoints: latency-svc-tnkfw [831.651489ms]
Jan 31 21:17:16.827: INFO: Created: latency-svc-vvwjn
Jan 31 21:17:16.837: INFO: Got endpoints: latency-svc-vvwjn [758.889612ms]
Jan 31 21:17:16.882: INFO: Created: latency-svc-twqkp
Jan 31 21:17:16.895: INFO: Got endpoints: latency-svc-twqkp [80.648775ms]
Jan 31 21:17:17.023: INFO: Created: latency-svc-7sgcs
Jan 31 21:17:17.085: INFO: Got endpoints: latency-svc-7sgcs [971.846341ms]
Jan 31 21:17:17.086: INFO: Created: latency-svc-gprw9
Jan 31 21:17:17.218: INFO: Got endpoints: latency-svc-gprw9 [1.062045952s]
Jan 31 21:17:17.222: INFO: Created: latency-svc-4kd9d
Jan 31 21:17:17.240: INFO: Got endpoints: latency-svc-4kd9d [1.0270506s]
Jan 31 21:17:17.464: INFO: Created: latency-svc-8dj8f
Jan 31 21:17:17.510: INFO: Got endpoints: latency-svc-8dj8f [1.264899225s]
Jan 31 21:17:17.559: INFO: Created: latency-svc-6p4f9
Jan 31 21:17:17.741: INFO: Got endpoints: latency-svc-6p4f9 [1.471038633s]
Jan 31 21:17:17.746: INFO: Created: latency-svc-x58jd
Jan 31 21:17:17.759: INFO: Got endpoints: latency-svc-x58jd [1.479954525s]
Jan 31 21:17:17.784: INFO: Created: latency-svc-2t78s
Jan 31 21:17:17.811: INFO: Got endpoints: latency-svc-2t78s [1.430560899s]
Jan 31 21:17:18.003: INFO: Created: latency-svc-682sm
Jan 31 21:17:18.086: INFO: Got endpoints: latency-svc-682sm [1.66603728s]
Jan 31 21:17:18.088: INFO: Created: latency-svc-4jkvc
Jan 31 21:17:18.236: INFO: Got endpoints: latency-svc-4jkvc [1.781566589s]
Jan 31 21:17:18.284: INFO: Created: latency-svc-clvwv
Jan 31 21:17:18.384: INFO: Got endpoints: latency-svc-clvwv [1.834778736s]
Jan 31 21:17:18.388: INFO: Created: latency-svc-t5txg
Jan 31 21:17:18.390: INFO: Got endpoints: latency-svc-t5txg [1.802005004s]
Jan 31 21:17:18.426: INFO: Created: latency-svc-c8bpq
Jan 31 21:17:18.437: INFO: Got endpoints: latency-svc-c8bpq [1.774315983s]
Jan 31 21:17:18.465: INFO: Created: latency-svc-gj9xj
Jan 31 21:17:18.472: INFO: Got endpoints: latency-svc-gj9xj [1.784787744s]
Jan 31 21:17:18.556: INFO: Created: latency-svc-79gpg
Jan 31 21:17:18.561: INFO: Got endpoints: latency-svc-79gpg [1.724283205s]
Jan 31 21:17:18.611: INFO: Created: latency-svc-fghfb
Jan 31 21:17:18.618: INFO: Got endpoints: latency-svc-fghfb [1.721743904s]
Jan 31 21:17:18.733: INFO: Created: latency-svc-2hlw6
Jan 31 21:17:18.739: INFO: Got endpoints: latency-svc-2hlw6 [1.65457112s]
Jan 31 21:17:18.739: INFO: Latencies: [71.844818ms 80.648775ms 200.704905ms 241.812336ms 353.866199ms 397.634142ms 426.120085ms 512.156058ms 558.059341ms 742.113942ms 744.933892ms 753.587709ms 758.889612ms 770.090992ms 802.736523ms 809.958565ms 816.941112ms 817.601802ms 820.579354ms 822.079637ms 823.838576ms 825.429472ms 827.331259ms 831.651489ms 833.243291ms 847.544345ms 852.610975ms 856.608979ms 862.978269ms 867.011715ms 867.539488ms 867.757098ms 872.971282ms 873.520181ms 875.492194ms 877.051756ms 881.380262ms 881.896038ms 882.510745ms 883.396286ms 884.827247ms 891.342934ms 892.759514ms 894.191226ms 895.514901ms 896.204932ms 898.508804ms 899.497964ms 902.041643ms 902.622092ms 903.414623ms 906.045457ms 906.381167ms 906.481207ms 907.186791ms 908.279418ms 908.956762ms 909.433424ms 910.190173ms 910.749913ms 912.581205ms 915.157077ms 916.16902ms 918.708466ms 921.031257ms 922.396809ms 922.421645ms 922.665926ms 922.983237ms 923.430637ms 924.078462ms 926.573816ms 928.086202ms 929.083927ms 929.745587ms 930.566716ms 933.101662ms 933.238503ms 934.202171ms 944.533523ms 945.44278ms 946.091441ms 948.783274ms 949.744875ms 950.387818ms 953.357017ms 955.140273ms 955.973541ms 957.518656ms 957.880128ms 959.050804ms 959.054926ms 963.829246ms 970.164642ms 971.846341ms 972.066186ms 972.778986ms 972.843028ms 973.038538ms 973.263496ms 974.804389ms 976.354975ms 976.539208ms 978.994123ms 980.558417ms 981.713787ms 982.574044ms 985.248582ms 987.771639ms 988.65444ms 993.053322ms 994.868647ms 996.116093ms 999.909235ms 1.000975563s 1.002644692s 1.007459809s 1.009625921s 1.010119386s 1.011578427s 1.01306149s 1.014940962s 1.025904144s 1.026144598s 1.026621152s 1.0270506s 1.028205397s 1.028588778s 1.029367049s 1.030387087s 1.030447405s 1.031261781s 1.035712257s 1.036618893s 1.036704714s 1.04422487s 1.045036783s 1.047252907s 1.049324744s 1.049450601s 1.052754522s 1.053267348s 1.05797271s 1.058312692s 1.062045952s 1.066295327s 1.073279042s 1.073436882s 1.074818204s 1.080253013s 1.081741256s 1.082731146s 1.095998757s 1.111692859s 1.111826918s 1.112546617s 1.114978945s 1.120180113s 1.132417953s 1.133327149s 1.139581267s 1.141499832s 1.141760625s 1.149175122s 1.16182487s 1.176680666s 1.185150514s 1.205877716s 1.216277279s 1.220401565s 1.242494524s 1.264899225s 1.294046829s 1.430560899s 1.471038633s 1.479954525s 1.607390772s 1.65457112s 1.658755777s 1.66603728s 1.700803017s 1.702537205s 1.721743904s 1.724283205s 1.724437127s 1.773473752s 1.774315983s 1.777837022s 1.780061629s 1.781119412s 1.781566589s 1.784787744s 1.800271641s 1.802005004s 1.806116967s 1.834778736s 1.841230652s 1.860075275s 1.867310166s 1.88148369s]
Jan 31 21:17:18.740: INFO: 50 %ile: 974.804389ms
Jan 31 21:17:18.740: INFO: 90 %ile: 1.700803017s
Jan 31 21:17:18.740: INFO: 99 %ile: 1.867310166s
Jan 31 21:17:18.740: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:17:18.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-2311" for this suite.

• [SLOW TEST:21.053 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":17,"skipped":330,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:17:18.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 31 21:17:18.927: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 31 21:17:18.968: INFO: Waiting for terminating namespaces to be deleted...
Jan 31 21:17:18.971: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 31 21:17:18.978: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 31 21:17:18.979: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 21:17:18.979: INFO: svc-latency-rc-8mhxd from svc-latency-2311 started at 2020-01-31 21:16:58 +0000 UTC (1 container status recorded)
Jan 31 21:17:18.979: INFO: 	Container svc-latency-rc ready: true, restart count 0
Jan 31 21:17:18.979: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 31 21:17:18.979: INFO: 	Container weave ready: true, restart count 1
Jan 31 21:17:18.979: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 21:17:18.979: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 31 21:17:18.999: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 31 21:17:18.999: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 31 21:17:18.999: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 31 21:17:18.999: INFO: 	Container etcd ready: true, restart count 1
Jan 31 21:17:18.999: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 31 21:17:18.999: INFO: 	Container coredns ready: true, restart count 0
Jan 31 21:17:18.999: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 31 21:17:18.999: INFO: 	Container coredns ready: true, restart count 0
Jan 31 21:17:18.999: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 31 21:17:18.999: INFO: 	Container weave ready: true, restart count 0
Jan 31 21:17:18.999: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 21:17:18.999: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 31 21:17:18.999: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 31 21:17:18.999: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 31 21:17:18.999: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 21:17:18.999: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 31 21:17:18.999: INFO: 	Container kube-scheduler ready: true, restart count 4
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Jan 31 21:17:19.283: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 31 21:17:19.283: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 31 21:17:19.283: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 31 21:17:19.283: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Jan 31 21:17:19.283: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Jan 31 21:17:19.283: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 31 21:17:19.283: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Jan 31 21:17:19.283: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 31 21:17:19.283: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Jan 31 21:17:19.283: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
Jan 31 21:17:19.283: INFO: Pod svc-latency-rc-8mhxd requesting resource cpu=0m on Node jerma-node
STEP: Starting Pods to consume most of the cluster CPU.
Jan 31 21:17:19.283: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Jan 31 21:17:19.302: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-18bf7627-eaac-476e-b434-cde4e563dd58.15ef1552bbac45a1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1715/filler-pod-18bf7627-eaac-476e-b434-cde4e563dd58 to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-18bf7627-eaac-476e-b434-cde4e563dd58.15ef1553992429c6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-18bf7627-eaac-476e-b434-cde4e563dd58.15ef1554530601bb], Reason = [Created], Message = [Created container filler-pod-18bf7627-eaac-476e-b434-cde4e563dd58]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-18bf7627-eaac-476e-b434-cde4e563dd58.15ef1554a09b630b], Reason = [Started], Message = [Started container filler-pod-18bf7627-eaac-476e-b434-cde4e563dd58]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-93aee674-0e73-46db-87e7-bd177ae34eef.15ef1552be0a87d6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1715/filler-pod-93aee674-0e73-46db-87e7-bd177ae34eef to jerma-server-mvvl6gufaqub]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-93aee674-0e73-46db-87e7-bd177ae34eef.15ef1553d0b68d86], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-93aee674-0e73-46db-87e7-bd177ae34eef.15ef1554b0730c16], Reason = [Created], Message = [Created container filler-pod-93aee674-0e73-46db-87e7-bd177ae34eef]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-93aee674-0e73-46db-87e7-bd177ae34eef.15ef1554de44b430], Reason = [Started], Message = [Started container filler-pod-93aee674-0e73-46db-87e7-bd177ae34eef]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ef1555160ff95e], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ef15551827a4ce], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:17:30.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1715" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:11.830 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":18,"skipped":355,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:17:30.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 31 21:17:44.139: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:17:44.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4858" for this suite.

• [SLOW TEST:15.911 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":360,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:17:46.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:17:55.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3822" for this suite.

• [SLOW TEST:8.580 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":376,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:17:55.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jan 31 21:17:55.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5431'
Jan 31 21:17:57.927: INFO: stderr: ""
Jan 31 21:17:57.928: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 31 21:17:58.972: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 21:17:58.972: INFO: Found 0 / 1
Jan 31 21:17:59.936: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 21:17:59.936: INFO: Found 0 / 1
Jan 31 21:18:00.956: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 21:18:00.956: INFO: Found 0 / 1
Jan 31 21:18:01.956: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 21:18:01.957: INFO: Found 0 / 1
Jan 31 21:18:02.988: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 21:18:02.988: INFO: Found 0 / 1
Jan 31 21:18:04.016: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 21:18:04.016: INFO: Found 0 / 1
Jan 31 21:18:04.934: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 21:18:04.934: INFO: Found 0 / 1
Jan 31 21:18:05.934: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 21:18:05.934: INFO: Found 0 / 1
Jan 31 21:18:06.979: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 21:18:06.979: INFO: Found 1 / 1
Jan 31 21:18:06.979: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 31 21:18:06.983: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 21:18:06.983: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
Jan 31 21:18:06.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-6gwf4 --namespace=kubectl-5431 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 31 21:18:07.148: INFO: stderr: ""
Jan 31 21:18:07.148: INFO: stdout: "pod/agnhost-master-6gwf4 patched\n"
STEP: checking annotations
Jan 31 21:18:07.241: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 21:18:07.241: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:18:07.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5431" for this suite.

• [SLOW TEST:12.171 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":21,"skipped":383,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:18:07.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5184 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5184;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5184 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5184;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5184.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5184.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5184.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5184.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5184.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5184.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5184.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5184.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5184.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5184.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5184.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5184.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5184.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 168.176.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.176.168_udp@PTR;check="$$(dig +tcp +noall +answer +search 168.176.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.176.168_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5184 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5184;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5184 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5184;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5184.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5184.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5184.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5184.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5184.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5184.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5184.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5184.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5184.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5184.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5184.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5184.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5184.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 168.176.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.176.168_udp@PTR;check="$$(dig +tcp +noall +answer +search 168.176.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.176.168_tcp@PTR;sleep 1; done

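Both probe scripts rely on dig's +search flag, which walks the search domains from the pod's /etc/resolv.conf, so a partial name like dns-test-service resolves via dns-test-service.dns-5184.svc.cluster.local. Each successful lookup drops an OK marker under /results for the test to read back. One probe spelled out by hand; the doubled $$ in the logged commands appears to be an escaping artifact of how the framework templates them, so a standalone script uses a single $:

    # run from a pod in namespace dns-5184
    check="$(dig +notcp +noall +answer +search dns-test-service A)" \
      && test -n "$check" \
      && echo OK > /results/udp@dns-test-service
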
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 21:18:19.590: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.594: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.598: INFO: Unable to read wheezy_udp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.602: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.606: INFO: Unable to read wheezy_udp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.609: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.613: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.616: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.648: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.655: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.662: INFO: Unable to read jessie_udp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.666: INFO: Unable to read jessie_tcp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.670: INFO: Unable to read jessie_udp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.674: INFO: Unable to read jessie_tcp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.678: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.681: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:19.702: INFO: Lookups using dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5184 wheezy_tcp@dns-test-service.dns-5184 wheezy_udp@dns-test-service.dns-5184.svc wheezy_tcp@dns-test-service.dns-5184.svc wheezy_udp@_http._tcp.dns-test-service.dns-5184.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5184.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5184 jessie_tcp@dns-test-service.dns-5184 jessie_udp@dns-test-service.dns-5184.svc jessie_tcp@dns-test-service.dns-5184.svc jessie_udp@_http._tcp.dns-test-service.dns-5184.svc jessie_tcp@_http._tcp.dns-test-service.dns-5184.svc]

Jan 31 21:18:24.767: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.777: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.854: INFO: Unable to read wheezy_udp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.861: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.865: INFO: Unable to read wheezy_udp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.868: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.871: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.875: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.907: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.914: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.923: INFO: Unable to read jessie_udp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.928: INFO: Unable to read jessie_tcp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.932: INFO: Unable to read jessie_udp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.936: INFO: Unable to read jessie_tcp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.939: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.943: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:24.971: INFO: Lookups using dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5184 wheezy_tcp@dns-test-service.dns-5184 wheezy_udp@dns-test-service.dns-5184.svc wheezy_tcp@dns-test-service.dns-5184.svc wheezy_udp@_http._tcp.dns-test-service.dns-5184.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5184.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5184 jessie_tcp@dns-test-service.dns-5184 jessie_udp@dns-test-service.dns-5184.svc jessie_tcp@dns-test-service.dns-5184.svc jessie_udp@_http._tcp.dns-test-service.dns-5184.svc jessie_tcp@_http._tcp.dns-test-service.dns-5184.svc]

Jan 31 21:18:29.719: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.727: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.738: INFO: Unable to read wheezy_udp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.744: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.751: INFO: Unable to read wheezy_udp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.763: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.770: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.776: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.846: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.857: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.867: INFO: Unable to read jessie_udp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.880: INFO: Unable to read jessie_tcp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.889: INFO: Unable to read jessie_udp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.896: INFO: Unable to read jessie_tcp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.909: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.917: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:29.969: INFO: Lookups using dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5184 wheezy_tcp@dns-test-service.dns-5184 wheezy_udp@dns-test-service.dns-5184.svc wheezy_tcp@dns-test-service.dns-5184.svc wheezy_udp@_http._tcp.dns-test-service.dns-5184.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5184.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5184 jessie_tcp@dns-test-service.dns-5184 jessie_udp@dns-test-service.dns-5184.svc jessie_tcp@dns-test-service.dns-5184.svc jessie_udp@_http._tcp.dns-test-service.dns-5184.svc jessie_tcp@_http._tcp.dns-test-service.dns-5184.svc]

Jan 31 21:18:34.719: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.762: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.776: INFO: Unable to read wheezy_udp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.792: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.801: INFO: Unable to read wheezy_udp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.805: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.812: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.816: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.917: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.930: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.936: INFO: Unable to read jessie_udp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.942: INFO: Unable to read jessie_tcp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.948: INFO: Unable to read jessie_udp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.951: INFO: Unable to read jessie_tcp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.956: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:34.985: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:35.029: INFO: Lookups using dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5184 wheezy_tcp@dns-test-service.dns-5184 wheezy_udp@dns-test-service.dns-5184.svc wheezy_tcp@dns-test-service.dns-5184.svc wheezy_udp@_http._tcp.dns-test-service.dns-5184.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5184.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5184 jessie_tcp@dns-test-service.dns-5184 jessie_udp@dns-test-service.dns-5184.svc jessie_tcp@dns-test-service.dns-5184.svc jessie_udp@_http._tcp.dns-test-service.dns-5184.svc jessie_tcp@_http._tcp.dns-test-service.dns-5184.svc]

Jan 31 21:18:39.713: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.720: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.724: INFO: Unable to read wheezy_udp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.730: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.736: INFO: Unable to read wheezy_udp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.741: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.747: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.751: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.792: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.796: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.801: INFO: Unable to read jessie_udp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.804: INFO: Unable to read jessie_tcp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.809: INFO: Unable to read jessie_udp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.836: INFO: Unable to read jessie_tcp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.858: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:39.879: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:40.005: INFO: Lookups using dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5184 wheezy_tcp@dns-test-service.dns-5184 wheezy_udp@dns-test-service.dns-5184.svc wheezy_tcp@dns-test-service.dns-5184.svc wheezy_udp@_http._tcp.dns-test-service.dns-5184.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5184.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5184 jessie_tcp@dns-test-service.dns-5184 jessie_udp@dns-test-service.dns-5184.svc jessie_tcp@dns-test-service.dns-5184.svc jessie_udp@_http._tcp.dns-test-service.dns-5184.svc jessie_tcp@_http._tcp.dns-test-service.dns-5184.svc]

Jan 31 21:18:44.714: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.719: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.726: INFO: Unable to read wheezy_udp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.729: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.734: INFO: Unable to read wheezy_udp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.738: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.742: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.745: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.793: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.815: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.825: INFO: Unable to read jessie_udp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.831: INFO: Unable to read jessie_tcp@dns-test-service.dns-5184 from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.837: INFO: Unable to read jessie_udp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.870: INFO: Unable to read jessie_tcp@dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.876: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.881: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5184.svc from pod dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22: the server could not find the requested resource (get pods dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22)
Jan 31 21:18:44.901: INFO: Lookups using dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5184 wheezy_tcp@dns-test-service.dns-5184 wheezy_udp@dns-test-service.dns-5184.svc wheezy_tcp@dns-test-service.dns-5184.svc wheezy_udp@_http._tcp.dns-test-service.dns-5184.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5184.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5184 jessie_tcp@dns-test-service.dns-5184 jessie_udp@dns-test-service.dns-5184.svc jessie_tcp@dns-test-service.dns-5184.svc jessie_udp@_http._tcp.dns-test-service.dns-5184.svc jessie_tcp@_http._tcp.dns-test-service.dns-5184.svc]

Jan 31 21:18:49.946: INFO: DNS probes using dns-5184/dns-test-60e88786-7ba6-4140-950e-6de1b38a9c22 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:18:50.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5184" for this suite.

• [SLOW TEST:43.127 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":22,"skipped":404,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:18:50.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 21:18:50.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2384'
Jan 31 21:18:50.597: INFO: stderr: ""
Jan 31 21:18:50.598: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jan 31 21:19:00.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2384 -o json'
Jan 31 21:19:00.806: INFO: stderr: ""
Jan 31 21:19:00.806: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-31T21:18:50Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-2384\",\n        \"resourceVersion\": \"5595194\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-2384/pods/e2e-test-httpd-pod\",\n        \"uid\": \"fc6c8584-e7dd-4cee-b362-d1df3c37469f\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-qn7j2\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-qn7j2\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-qn7j2\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-31T21:18:51Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-31T21:18:58Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-31T21:18:58Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-31T21:18:50Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://f0d521d534c427742d4cf2cd1f10b27519cdd66dfa94b7c7a4dfd9820cda13e7\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-31T21:18:57Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-31T21:18:51Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 31 21:19:00.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2384'
Jan 31 21:19:01.131: INFO: stderr: ""
Jan 31 21:19:01.131: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882
Jan 31 21:19:01.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2384'
Jan 31 21:19:05.401: INFO: stderr: ""
Jan 31 21:19:05.401: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:19:05.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2384" for this suite.

• [SLOW TEST:15.039 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":23,"skipped":436,"failed":0}
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:19:05.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Jan 31 21:19:05.526: INFO: Waiting up to 5m0s for pod "var-expansion-eb979936-4305-4876-b242-d7d96107571d" in namespace "var-expansion-7801" to be "success or failure"
Jan 31 21:19:05.535: INFO: Pod "var-expansion-eb979936-4305-4876-b242-d7d96107571d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.817848ms
Jan 31 21:19:07.542: INFO: Pod "var-expansion-eb979936-4305-4876-b242-d7d96107571d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016259334s
Jan 31 21:19:09.549: INFO: Pod "var-expansion-eb979936-4305-4876-b242-d7d96107571d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022571395s
Jan 31 21:19:11.601: INFO: Pod "var-expansion-eb979936-4305-4876-b242-d7d96107571d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07510752s
Jan 31 21:19:13.610: INFO: Pod "var-expansion-eb979936-4305-4876-b242-d7d96107571d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.084129097s
STEP: Saw pod success
Jan 31 21:19:13.611: INFO: Pod "var-expansion-eb979936-4305-4876-b242-d7d96107571d" satisfied condition "success or failure"
Jan 31 21:19:13.616: INFO: Trying to get logs from node jerma-node pod var-expansion-eb979936-4305-4876-b242-d7d96107571d container dapi-container: 
STEP: delete the pod
Jan 31 21:19:13.689: INFO: Waiting for pod var-expansion-eb979936-4305-4876-b242-d7d96107571d to disappear
Jan 31 21:19:13.704: INFO: Pod var-expansion-eb979936-4305-4876-b242-d7d96107571d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:19:13.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7801" for this suite.

• [SLOW TEST:8.329 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":442,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:19:13.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jan 31 21:19:13.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jan 31 21:19:27.329: INFO: >>> kubeConfig: /root/.kube/config
Jan 31 21:19:29.441: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:19:41.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9665" for this suite.

• [SLOW TEST:27.957 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":25,"skipped":444,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:19:41.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:19:41.815: INFO: Creating deployment "test-recreate-deployment"
Jan 31 21:19:41.823: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan 31 21:19:41.869: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 31 21:19:43.883: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan 31 21:19:43.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102381, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102381, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102382, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102381, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:19:45.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102381, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102381, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102382, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102381, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:19:47.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102381, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102381, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102382, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102381, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:19:49.900: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 31 21:19:49.918: INFO: Updating deployment test-recreate-deployment
Jan 31 21:19:49.918: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 31 21:19:50.239: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-9056 /apis/apps/v1/namespaces/deployment-9056/deployments/test-recreate-deployment 8c89c6bc-810a-41ab-b9e8-94aeaa7b0831 5595433 2 2020-01-31 21:19:41 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00303dfb8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-31 21:19:50 +0000 UTC,LastTransitionTime:2020-01-31 21:19:50 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-31 21:19:50 +0000 UTC,LastTransitionTime:2020-01-31 21:19:41 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Jan 31 21:19:50.242: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-9056 /apis/apps/v1/namespaces/deployment-9056/replicasets/test-recreate-deployment-5f94c574ff b083be7f-27e3-4d3c-82cb-f64b18b33e42 5595432 1 2020-01-31 21:19:50 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 8c89c6bc-810a-41ab-b9e8-94aeaa7b0831 0xc0030152d7 0xc0030152d8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003015348  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 31 21:19:50.242: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 31 21:19:50.242: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-9056 /apis/apps/v1/namespaces/deployment-9056/replicasets/test-recreate-deployment-799c574856 aedc8470-3f79-4199-a17e-fb91f10d8519 5595422 2 2020-01-31 21:19:41 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 8c89c6bc-810a-41ab-b9e8-94aeaa7b0831 0xc0030153c7 0xc0030153c8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003015478  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 31 21:19:50.245: INFO: Pod "test-recreate-deployment-5f94c574ff-28d8p" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-28d8p test-recreate-deployment-5f94c574ff- deployment-9056 /api/v1/namespaces/deployment-9056/pods/test-recreate-deployment-5f94c574ff-28d8p a6e90d0a-0264-4480-b93e-00b200f9e54a 5595427 0 2020-01-31 21:19:50 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff b083be7f-27e3-4d3c-82cb-f64b18b33e42 0xc003015a17 0xc003015a18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thp87,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thp87,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thp87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 21:19:50 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:19:50.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9056" for this suite.

• [SLOW TEST:8.617 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":26,"skipped":458,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:19:50.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Jan 31 21:19:50.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:20:10.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2352" for this suite.

• [SLOW TEST:20.485 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":27,"skipped":458,"failed":0}
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:20:10.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 31 21:20:10.865: INFO: Waiting up to 5m0s for pod "pod-3c246ad2-357e-447d-b4cb-0a6dd44343e2" in namespace "emptydir-9894" to be "success or failure"
Jan 31 21:20:10.869: INFO: Pod "pod-3c246ad2-357e-447d-b4cb-0a6dd44343e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00081ms
Jan 31 21:20:12.883: INFO: Pod "pod-3c246ad2-357e-447d-b4cb-0a6dd44343e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017839797s
Jan 31 21:20:14.892: INFO: Pod "pod-3c246ad2-357e-447d-b4cb-0a6dd44343e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026906467s
Jan 31 21:20:16.896: INFO: Pod "pod-3c246ad2-357e-447d-b4cb-0a6dd44343e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031450478s
STEP: Saw pod success
Jan 31 21:20:16.896: INFO: Pod "pod-3c246ad2-357e-447d-b4cb-0a6dd44343e2" satisfied condition "success or failure"
Jan 31 21:20:16.933: INFO: Trying to get logs from node jerma-node pod pod-3c246ad2-357e-447d-b4cb-0a6dd44343e2 container test-container: 
STEP: delete the pod
Jan 31 21:20:17.001: INFO: Waiting for pod pod-3c246ad2-357e-447d-b4cb-0a6dd44343e2 to disappear
Jan 31 21:20:17.006: INFO: Pod pod-3c246ad2-357e-447d-b4cb-0a6dd44343e2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:20:17.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9894" for this suite.

• [SLOW TEST:6.206 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":458,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:20:17.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 31 21:20:17.363: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:20:29.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7392" for this suite.

• [SLOW TEST:12.126 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":29,"skipped":475,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:20:29.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-4e3b7535-5cf1-42f7-adbb-8a014e88d180
STEP: Creating a pod to test consume configMaps
Jan 31 21:20:29.218: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9bac03d6-9400-4bf8-98b2-4f0a65ec41fb" in namespace "projected-1992" to be "success or failure"
Jan 31 21:20:29.225: INFO: Pod "pod-projected-configmaps-9bac03d6-9400-4bf8-98b2-4f0a65ec41fb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.122099ms
Jan 31 21:20:31.233: INFO: Pod "pod-projected-configmaps-9bac03d6-9400-4bf8-98b2-4f0a65ec41fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014698796s
Jan 31 21:20:33.241: INFO: Pod "pod-projected-configmaps-9bac03d6-9400-4bf8-98b2-4f0a65ec41fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022697239s
Jan 31 21:20:35.248: INFO: Pod "pod-projected-configmaps-9bac03d6-9400-4bf8-98b2-4f0a65ec41fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030156008s
Jan 31 21:20:37.255: INFO: Pod "pod-projected-configmaps-9bac03d6-9400-4bf8-98b2-4f0a65ec41fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.036921221s
STEP: Saw pod success
Jan 31 21:20:37.255: INFO: Pod "pod-projected-configmaps-9bac03d6-9400-4bf8-98b2-4f0a65ec41fb" satisfied condition "success or failure"
Jan 31 21:20:37.261: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-9bac03d6-9400-4bf8-98b2-4f0a65ec41fb container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 21:20:37.321: INFO: Waiting for pod pod-projected-configmaps-9bac03d6-9400-4bf8-98b2-4f0a65ec41fb to disappear
Jan 31 21:20:37.351: INFO: Pod pod-projected-configmaps-9bac03d6-9400-4bf8-98b2-4f0a65ec41fb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:20:37.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1992" for this suite.

• [SLOW TEST:8.228 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":486,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:20:37.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7719.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7719.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7719.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7719.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7719.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7719.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 21:20:47.671: INFO: DNS probes using dns-7719/dns-test-4708fb4e-10ef-42fe-868a-1562e32a6858 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:20:47.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7719" for this suite.

• [SLOW TEST:10.490 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":31,"skipped":516,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:20:47.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Jan 31 21:20:48.076: INFO: Waiting up to 5m0s for pod "client-containers-61217a96-08e6-4379-9cfe-5e1e9cd654c4" in namespace "containers-253" to be "success or failure"
Jan 31 21:20:48.080: INFO: Pod "client-containers-61217a96-08e6-4379-9cfe-5e1e9cd654c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265727ms
Jan 31 21:20:50.086: INFO: Pod "client-containers-61217a96-08e6-4379-9cfe-5e1e9cd654c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010268065s
Jan 31 21:20:52.095: INFO: Pod "client-containers-61217a96-08e6-4379-9cfe-5e1e9cd654c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018482221s
Jan 31 21:20:54.099: INFO: Pod "client-containers-61217a96-08e6-4379-9cfe-5e1e9cd654c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023316078s
Jan 31 21:20:56.107: INFO: Pod "client-containers-61217a96-08e6-4379-9cfe-5e1e9cd654c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.030677256s
Jan 31 21:20:58.111: INFO: Pod "client-containers-61217a96-08e6-4379-9cfe-5e1e9cd654c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.035142823s
STEP: Saw pod success
Jan 31 21:20:58.111: INFO: Pod "client-containers-61217a96-08e6-4379-9cfe-5e1e9cd654c4" satisfied condition "success or failure"
Jan 31 21:20:58.116: INFO: Trying to get logs from node jerma-node pod client-containers-61217a96-08e6-4379-9cfe-5e1e9cd654c4 container test-container: 
STEP: delete the pod
Jan 31 21:20:58.203: INFO: Waiting for pod client-containers-61217a96-08e6-4379-9cfe-5e1e9cd654c4 to disappear
Jan 31 21:20:58.229: INFO: Pod client-containers-61217a96-08e6-4379-9cfe-5e1e9cd654c4 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:20:58.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-253" for this suite.

• [SLOW TEST:10.372 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":528,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:20:58.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 21:20:58.413: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3571df5c-0849-4e43-84dd-e66a01069ed2" in namespace "projected-4468" to be "success or failure"
Jan 31 21:20:58.422: INFO: Pod "downwardapi-volume-3571df5c-0849-4e43-84dd-e66a01069ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.072297ms
Jan 31 21:21:00.430: INFO: Pod "downwardapi-volume-3571df5c-0849-4e43-84dd-e66a01069ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017205172s
Jan 31 21:21:02.497: INFO: Pod "downwardapi-volume-3571df5c-0849-4e43-84dd-e66a01069ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084223981s
Jan 31 21:21:04.508: INFO: Pod "downwardapi-volume-3571df5c-0849-4e43-84dd-e66a01069ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095396909s
Jan 31 21:21:06.519: INFO: Pod "downwardapi-volume-3571df5c-0849-4e43-84dd-e66a01069ed2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105924342s
STEP: Saw pod success
Jan 31 21:21:06.519: INFO: Pod "downwardapi-volume-3571df5c-0849-4e43-84dd-e66a01069ed2" satisfied condition "success or failure"
Jan 31 21:21:06.523: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-3571df5c-0849-4e43-84dd-e66a01069ed2 container client-container: 
STEP: delete the pod
Jan 31 21:21:06.581: INFO: Waiting for pod downwardapi-volume-3571df5c-0849-4e43-84dd-e66a01069ed2 to disappear
Jan 31 21:21:06.604: INFO: Pod downwardapi-volume-3571df5c-0849-4e43-84dd-e66a01069ed2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:21:06.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4468" for this suite.

• [SLOW TEST:8.391 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":562,"failed":0}
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:21:06.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:21:49.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2846" for this suite.

• [SLOW TEST:43.328 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":563,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:21:49.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 21:21:51.174: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 21:21:53.190: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:21:55.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:21:57.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:21:59.194: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102511, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 21:22:02.270: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:22:14.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5892" for this suite.
STEP: Destroying namespace "webhook-5892-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:24.785 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":35,"skipped":582,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:22:14.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-3636
STEP: creating replication controller nodeport-test in namespace services-3636
I0131 21:22:14.962916       8 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3636, replica count: 2
I0131 21:22:18.013928       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:22:21.014719       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:22:24.015125       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:22:27.015646       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 31 21:22:27.015: INFO: Creating new exec pod
Jan 31 21:22:34.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3636 execpod84dlk -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jan 31 21:22:34.395: INFO: stderr: "I0131 21:22:34.206984     150 log.go:172] (0xc00095a000) (0xc000677cc0) Create stream\nI0131 21:22:34.207186     150 log.go:172] (0xc00095a000) (0xc000677cc0) Stream added, broadcasting: 1\nI0131 21:22:34.211828     150 log.go:172] (0xc00095a000) Reply frame received for 1\nI0131 21:22:34.211867     150 log.go:172] (0xc00095a000) (0xc000612780) Create stream\nI0131 21:22:34.211876     150 log.go:172] (0xc00095a000) (0xc000612780) Stream added, broadcasting: 3\nI0131 21:22:34.212844     150 log.go:172] (0xc00095a000) Reply frame received for 3\nI0131 21:22:34.212873     150 log.go:172] (0xc00095a000) (0xc0003d5540) Create stream\nI0131 21:22:34.212883     150 log.go:172] (0xc00095a000) (0xc0003d5540) Stream added, broadcasting: 5\nI0131 21:22:34.213903     150 log.go:172] (0xc00095a000) Reply frame received for 5\nI0131 21:22:34.296382     150 log.go:172] (0xc00095a000) Data frame received for 5\nI0131 21:22:34.296468     150 log.go:172] (0xc0003d5540) (5) Data frame handling\nI0131 21:22:34.296493     150 log.go:172] (0xc0003d5540) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0131 21:22:34.312325     150 log.go:172] (0xc00095a000) Data frame received for 5\nI0131 21:22:34.312492     150 log.go:172] (0xc0003d5540) (5) Data frame handling\nI0131 21:22:34.312525     150 log.go:172] (0xc0003d5540) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0131 21:22:34.388999     150 log.go:172] (0xc00095a000) Data frame received for 1\nI0131 21:22:34.389101     150 log.go:172] (0xc00095a000) (0xc0003d5540) Stream removed, broadcasting: 5\nI0131 21:22:34.389209     150 log.go:172] (0xc000677cc0) (1) Data frame handling\nI0131 21:22:34.389243     150 log.go:172] (0xc000677cc0) (1) Data frame sent\nI0131 21:22:34.389283     150 log.go:172] (0xc00095a000) (0xc000612780) Stream removed, broadcasting: 3\nI0131 21:22:34.389362     150 log.go:172] (0xc00095a000) (0xc000677cc0) Stream removed, broadcasting: 1\nI0131 21:22:34.390290     150 log.go:172] (0xc00095a000) Go away received\nI0131 21:22:34.390410     150 log.go:172] (0xc00095a000) (0xc000677cc0) Stream removed, broadcasting: 1\nI0131 21:22:34.390454     150 log.go:172] (0xc00095a000) (0xc000612780) Stream removed, broadcasting: 3\nI0131 21:22:34.390459     150 log.go:172] (0xc00095a000) (0xc0003d5540) Stream removed, broadcasting: 5\n"
Jan 31 21:22:34.396: INFO: stdout: ""
Jan 31 21:22:34.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3636 execpod84dlk -- /bin/sh -x -c nc -zv -t -w 2 10.96.226.78 80'
Jan 31 21:22:34.780: INFO: stderr: "I0131 21:22:34.591066     167 log.go:172] (0xc000a98d10) (0xc0009b2280) Create stream\nI0131 21:22:34.591312     167 log.go:172] (0xc000a98d10) (0xc0009b2280) Stream added, broadcasting: 1\nI0131 21:22:34.597000     167 log.go:172] (0xc000a98d10) Reply frame received for 1\nI0131 21:22:34.597065     167 log.go:172] (0xc000a98d10) (0xc000a720a0) Create stream\nI0131 21:22:34.597077     167 log.go:172] (0xc000a98d10) (0xc000a720a0) Stream added, broadcasting: 3\nI0131 21:22:34.598311     167 log.go:172] (0xc000a98d10) Reply frame received for 3\nI0131 21:22:34.598337     167 log.go:172] (0xc000a98d10) (0xc0009b2320) Create stream\nI0131 21:22:34.598345     167 log.go:172] (0xc000a98d10) (0xc0009b2320) Stream added, broadcasting: 5\nI0131 21:22:34.602319     167 log.go:172] (0xc000a98d10) Reply frame received for 5\nI0131 21:22:34.707842     167 log.go:172] (0xc000a98d10) Data frame received for 5\nI0131 21:22:34.708228     167 log.go:172] (0xc0009b2320) (5) Data frame handling\nI0131 21:22:34.708297     167 log.go:172] (0xc0009b2320) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.226.78 80\nConnection to 10.96.226.78 80 port [tcp/http] succeeded!\nI0131 21:22:34.770393     167 log.go:172] (0xc000a98d10) (0xc0009b2320) Stream removed, broadcasting: 5\nI0131 21:22:34.770763     167 log.go:172] (0xc000a98d10) Data frame received for 1\nI0131 21:22:34.770839     167 log.go:172] (0xc000a98d10) (0xc000a720a0) Stream removed, broadcasting: 3\nI0131 21:22:34.771154     167 log.go:172] (0xc0009b2280) (1) Data frame handling\nI0131 21:22:34.771289     167 log.go:172] (0xc0009b2280) (1) Data frame sent\nI0131 21:22:34.771358     167 log.go:172] (0xc000a98d10) (0xc0009b2280) Stream removed, broadcasting: 1\nI0131 21:22:34.771451     167 log.go:172] (0xc000a98d10) Go away received\nI0131 21:22:34.772775     167 log.go:172] (0xc000a98d10) (0xc0009b2280) Stream removed, broadcasting: 1\nI0131 21:22:34.772800     167 log.go:172] (0xc000a98d10) (0xc000a720a0) Stream removed, broadcasting: 3\nI0131 21:22:34.772805     167 log.go:172] (0xc000a98d10) (0xc0009b2320) Stream removed, broadcasting: 5\n"
Jan 31 21:22:34.780: INFO: stdout: ""
Jan 31 21:22:34.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3636 execpod84dlk -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30566'
Jan 31 21:22:35.147: INFO: stderr: "I0131 21:22:34.958774     187 log.go:172] (0xc0009e8dc0) (0xc0009c05a0) Create stream\nI0131 21:22:34.958939     187 log.go:172] (0xc0009e8dc0) (0xc0009c05a0) Stream added, broadcasting: 1\nI0131 21:22:34.973631     187 log.go:172] (0xc0009e8dc0) Reply frame received for 1\nI0131 21:22:34.973664     187 log.go:172] (0xc0009e8dc0) (0xc000688820) Create stream\nI0131 21:22:34.973673     187 log.go:172] (0xc0009e8dc0) (0xc000688820) Stream added, broadcasting: 3\nI0131 21:22:34.974784     187 log.go:172] (0xc0009e8dc0) Reply frame received for 3\nI0131 21:22:34.974821     187 log.go:172] (0xc0009e8dc0) (0xc0002f55e0) Create stream\nI0131 21:22:34.974832     187 log.go:172] (0xc0009e8dc0) (0xc0002f55e0) Stream added, broadcasting: 5\nI0131 21:22:34.978530     187 log.go:172] (0xc0009e8dc0) Reply frame received for 5\nI0131 21:22:35.051713     187 log.go:172] (0xc0009e8dc0) Data frame received for 5\nI0131 21:22:35.051802     187 log.go:172] (0xc0002f55e0) (5) Data frame handling\nI0131 21:22:35.051828     187 log.go:172] (0xc0002f55e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30566\nI0131 21:22:35.055819     187 log.go:172] (0xc0009e8dc0) Data frame received for 5\nI0131 21:22:35.055881     187 log.go:172] (0xc0002f55e0) (5) Data frame handling\nI0131 21:22:35.055914     187 log.go:172] (0xc0002f55e0) (5) Data frame sent\nConnection to 10.96.2.250 30566 port [tcp/30566] succeeded!\nI0131 21:22:35.134839     187 log.go:172] (0xc0009e8dc0) (0xc000688820) Stream removed, broadcasting: 3\nI0131 21:22:35.135083     187 log.go:172] (0xc0009e8dc0) Data frame received for 1\nI0131 21:22:35.135097     187 log.go:172] (0xc0009c05a0) (1) Data frame handling\nI0131 21:22:35.135115     187 log.go:172] (0xc0009c05a0) (1) Data frame sent\nI0131 21:22:35.135128     187 log.go:172] (0xc0009e8dc0) (0xc0009c05a0) Stream removed, broadcasting: 1\nI0131 21:22:35.135266     187 log.go:172] (0xc0009e8dc0) (0xc0002f55e0) Stream removed, broadcasting: 5\nI0131 21:22:35.135337     187 log.go:172] (0xc0009e8dc0) Go away received\nI0131 21:22:35.136016     187 log.go:172] (0xc0009e8dc0) (0xc0009c05a0) Stream removed, broadcasting: 1\nI0131 21:22:35.136027     187 log.go:172] (0xc0009e8dc0) (0xc000688820) Stream removed, broadcasting: 3\nI0131 21:22:35.136034     187 log.go:172] (0xc0009e8dc0) (0xc0002f55e0) Stream removed, broadcasting: 5\n"
Jan 31 21:22:35.147: INFO: stdout: ""
Jan 31 21:22:35.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3636 execpod84dlk -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30566'
Jan 31 21:22:35.485: INFO: stderr: "I0131 21:22:35.341536     208 log.go:172] (0xc0009c0bb0) (0xc000a58280) Create stream\nI0131 21:22:35.341760     208 log.go:172] (0xc0009c0bb0) (0xc000a58280) Stream added, broadcasting: 1\nI0131 21:22:35.347468     208 log.go:172] (0xc0009c0bb0) Reply frame received for 1\nI0131 21:22:35.347511     208 log.go:172] (0xc0009c0bb0) (0xc000a900a0) Create stream\nI0131 21:22:35.347525     208 log.go:172] (0xc0009c0bb0) (0xc000a900a0) Stream added, broadcasting: 3\nI0131 21:22:35.348860     208 log.go:172] (0xc0009c0bb0) Reply frame received for 3\nI0131 21:22:35.348922     208 log.go:172] (0xc0009c0bb0) (0xc000a58320) Create stream\nI0131 21:22:35.348933     208 log.go:172] (0xc0009c0bb0) (0xc000a58320) Stream added, broadcasting: 5\nI0131 21:22:35.350391     208 log.go:172] (0xc0009c0bb0) Reply frame received for 5\nI0131 21:22:35.406406     208 log.go:172] (0xc0009c0bb0) Data frame received for 5\nI0131 21:22:35.406478     208 log.go:172] (0xc000a58320) (5) Data frame handling\nI0131 21:22:35.406500     208 log.go:172] (0xc000a58320) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30566\nI0131 21:22:35.410111     208 log.go:172] (0xc0009c0bb0) Data frame received for 5\nI0131 21:22:35.410125     208 log.go:172] (0xc000a58320) (5) Data frame handling\nI0131 21:22:35.410141     208 log.go:172] (0xc000a58320) (5) Data frame sent\nConnection to 10.96.1.234 30566 port [tcp/30566] succeeded!\nI0131 21:22:35.474830     208 log.go:172] (0xc0009c0bb0) (0xc000a900a0) Stream removed, broadcasting: 3\nI0131 21:22:35.475389     208 log.go:172] (0xc0009c0bb0) Data frame received for 1\nI0131 21:22:35.475423     208 log.go:172] (0xc000a58280) (1) Data frame handling\nI0131 21:22:35.475466     208 log.go:172] (0xc000a58280) (1) Data frame sent\nI0131 21:22:35.475491     208 log.go:172] (0xc0009c0bb0) (0xc000a58280) Stream removed, broadcasting: 1\nI0131 21:22:35.475828     208 log.go:172] (0xc0009c0bb0) (0xc000a58320) Stream removed, broadcasting: 5\nI0131 21:22:35.476019     208 log.go:172] (0xc0009c0bb0) Go away received\nI0131 21:22:35.476752     208 log.go:172] (0xc0009c0bb0) (0xc000a58280) Stream removed, broadcasting: 1\nI0131 21:22:35.476774     208 log.go:172] (0xc0009c0bb0) (0xc000a900a0) Stream removed, broadcasting: 3\nI0131 21:22:35.476784     208 log.go:172] (0xc0009c0bb0) (0xc000a58320) Stream removed, broadcasting: 5\n"
Jan 31 21:22:35.485: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:22:35.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3636" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:20.748 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":36,"skipped":604,"failed":0}
SS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:22:35.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-2681
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2681 to expose endpoints map[]
Jan 31 21:22:35.682: INFO: successfully validated that service endpoint-test2 in namespace services-2681 exposes endpoints map[] (13.694338ms elapsed)
STEP: Creating pod pod1 in namespace services-2681
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2681 to expose endpoints map[pod1:[80]]
Jan 31 21:22:39.963: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.259008917s elapsed, will retry)
Jan 31 21:22:46.763: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (11.058132595s elapsed, will retry)
Jan 31 21:22:47.774: INFO: successfully validated that service endpoint-test2 in namespace services-2681 exposes endpoints map[pod1:[80]] (12.069326449s elapsed)
STEP: Creating pod pod2 in namespace services-2681
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2681 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 31 21:22:52.210: INFO: Unexpected endpoints: found map[d7906209-d016-46eb-99ad-a4b3bb4ebbe5:[80]], expected map[pod1:[80] pod2:[80]] (4.409960194s elapsed, will retry)
Jan 31 21:22:55.246: INFO: successfully validated that service endpoint-test2 in namespace services-2681 exposes endpoints map[pod1:[80] pod2:[80]] (7.445441875s elapsed)
STEP: Deleting pod pod1 in namespace services-2681
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2681 to expose endpoints map[pod2:[80]]
Jan 31 21:22:55.332: INFO: successfully validated that service endpoint-test2 in namespace services-2681 exposes endpoints map[pod2:[80]] (81.99276ms elapsed)
STEP: Deleting pod pod2 in namespace services-2681
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2681 to expose endpoints map[]
Jan 31 21:22:55.377: INFO: successfully validated that service endpoint-test2 in namespace services-2681 exposes endpoints map[] (12.755525ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:22:55.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2681" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:19.959 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":37,"skipped":606,"failed":0}
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:22:55.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:23:05.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2427" for this suite.

• [SLOW TEST:10.558 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":38,"skipped":606,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:23:06.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 21:23:06.145: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eabbb6ea-9210-4eae-8ab9-ab47062c8a18" in namespace "downward-api-7847" to be "success or failure"
Jan 31 21:23:06.159: INFO: Pod "downwardapi-volume-eabbb6ea-9210-4eae-8ab9-ab47062c8a18": Phase="Pending", Reason="", readiness=false. Elapsed: 13.813672ms
Jan 31 21:23:08.164: INFO: Pod "downwardapi-volume-eabbb6ea-9210-4eae-8ab9-ab47062c8a18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019258577s
Jan 31 21:23:10.177: INFO: Pod "downwardapi-volume-eabbb6ea-9210-4eae-8ab9-ab47062c8a18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031896045s
Jan 31 21:23:12.183: INFO: Pod "downwardapi-volume-eabbb6ea-9210-4eae-8ab9-ab47062c8a18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037953059s
Jan 31 21:23:14.189: INFO: Pod "downwardapi-volume-eabbb6ea-9210-4eae-8ab9-ab47062c8a18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043459267s
STEP: Saw pod success
Jan 31 21:23:14.189: INFO: Pod "downwardapi-volume-eabbb6ea-9210-4eae-8ab9-ab47062c8a18" satisfied condition "success or failure"
Jan 31 21:23:14.192: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-eabbb6ea-9210-4eae-8ab9-ab47062c8a18 container client-container: 
STEP: delete the pod
Jan 31 21:23:14.246: INFO: Waiting for pod downwardapi-volume-eabbb6ea-9210-4eae-8ab9-ab47062c8a18 to disappear
Jan 31 21:23:14.267: INFO: Pod downwardapi-volume-eabbb6ea-9210-4eae-8ab9-ab47062c8a18 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:23:14.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7847" for this suite.

• [SLOW TEST:8.266 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":611,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:23:14.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 21:23:14.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3806'
Jan 31 21:23:14.664: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 21:23:14.664: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jan 31 21:23:14.691: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-8c7jp]
Jan 31 21:23:14.691: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-8c7jp" in namespace "kubectl-3806" to be "running and ready"
Jan 31 21:23:14.694: INFO: Pod "e2e-test-httpd-rc-8c7jp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.608055ms
Jan 31 21:23:16.707: INFO: Pod "e2e-test-httpd-rc-8c7jp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015966013s
Jan 31 21:23:18.714: INFO: Pod "e2e-test-httpd-rc-8c7jp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023239381s
Jan 31 21:23:20.721: INFO: Pod "e2e-test-httpd-rc-8c7jp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029751938s
Jan 31 21:23:22.735: INFO: Pod "e2e-test-httpd-rc-8c7jp": Phase="Running", Reason="", readiness=true. Elapsed: 8.043673192s
Jan 31 21:23:22.735: INFO: Pod "e2e-test-httpd-rc-8c7jp" satisfied condition "running and ready"
Jan 31 21:23:22.735: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-8c7jp]
Jan 31 21:23:22.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-3806'
Jan 31 21:23:22.887: INFO: stderr: ""
Jan 31 21:23:22.888: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Fri Jan 31 21:23:20.753393 2020] [mpm_event:notice] [pid 1:tid 140580607576936] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Jan 31 21:23:20.753446 2020] [core:notice] [pid 1:tid 140580607576936] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 31 21:23:22.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3806'
Jan 31 21:23:23.034: INFO: stderr: ""
Jan 31 21:23:23.034: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:23:23.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3806" for this suite.

• [SLOW TEST:8.789 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":40,"skipped":642,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:23:23.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-9148
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 21:23:23.122: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 31 21:24:01.302: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9148 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 21:24:01.303: INFO: >>> kubeConfig: /root/.kube/config
I0131 21:24:01.335147       8 log.go:172] (0xc0026d2630) (0xc002893040) Create stream
I0131 21:24:01.335236       8 log.go:172] (0xc0026d2630) (0xc002893040) Stream added, broadcasting: 1
I0131 21:24:01.338848       8 log.go:172] (0xc0026d2630) Reply frame received for 1
I0131 21:24:01.338879       8 log.go:172] (0xc0026d2630) (0xc000544c80) Create stream
I0131 21:24:01.338887       8 log.go:172] (0xc0026d2630) (0xc000544c80) Stream added, broadcasting: 3
I0131 21:24:01.339973       8 log.go:172] (0xc0026d2630) Reply frame received for 3
I0131 21:24:01.339991       8 log.go:172] (0xc0026d2630) (0xc001384e60) Create stream
I0131 21:24:01.339999       8 log.go:172] (0xc0026d2630) (0xc001384e60) Stream added, broadcasting: 5
I0131 21:24:01.341207       8 log.go:172] (0xc0026d2630) Reply frame received for 5
I0131 21:24:01.422836       8 log.go:172] (0xc0026d2630) Data frame received for 3
I0131 21:24:01.423040       8 log.go:172] (0xc000544c80) (3) Data frame handling
I0131 21:24:01.423057       8 log.go:172] (0xc000544c80) (3) Data frame sent
I0131 21:24:01.497585       8 log.go:172] (0xc0026d2630) Data frame received for 1
I0131 21:24:01.497685       8 log.go:172] (0xc002893040) (1) Data frame handling
I0131 21:24:01.497730       8 log.go:172] (0xc002893040) (1) Data frame sent
I0131 21:24:01.497982       8 log.go:172] (0xc0026d2630) (0xc002893040) Stream removed, broadcasting: 1
I0131 21:24:01.498855       8 log.go:172] (0xc0026d2630) (0xc000544c80) Stream removed, broadcasting: 3
I0131 21:24:01.498974       8 log.go:172] (0xc0026d2630) (0xc001384e60) Stream removed, broadcasting: 5
I0131 21:24:01.499103       8 log.go:172] (0xc0026d2630) (0xc002893040) Stream removed, broadcasting: 1
I0131 21:24:01.499126       8 log.go:172] (0xc0026d2630) (0xc000544c80) Stream removed, broadcasting: 3
I0131 21:24:01.499145       8 log.go:172] (0xc0026d2630) (0xc001384e60) Stream removed, broadcasting: 5
Jan 31 21:24:01.499: INFO: Found all expected endpoints: [netserver-0]
I0131 21:24:01.499651       8 log.go:172] (0xc0026d2630) Go away received
Jan 31 21:24:01.510: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9148 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 21:24:01.510: INFO: >>> kubeConfig: /root/.kube/config
I0131 21:24:01.543513       8 log.go:172] (0xc002c72370) (0xc000545c20) Create stream
I0131 21:24:01.543569       8 log.go:172] (0xc002c72370) (0xc000545c20) Stream added, broadcasting: 1
I0131 21:24:01.549234       8 log.go:172] (0xc002c72370) Reply frame received for 1
I0131 21:24:01.549323       8 log.go:172] (0xc002c72370) (0xc001384fa0) Create stream
I0131 21:24:01.549338       8 log.go:172] (0xc002c72370) (0xc001384fa0) Stream added, broadcasting: 3
I0131 21:24:01.550744       8 log.go:172] (0xc002c72370) Reply frame received for 3
I0131 21:24:01.550763       8 log.go:172] (0xc002c72370) (0xc000545d60) Create stream
I0131 21:24:01.550843       8 log.go:172] (0xc002c72370) (0xc000545d60) Stream added, broadcasting: 5
I0131 21:24:01.551981       8 log.go:172] (0xc002c72370) Reply frame received for 5
I0131 21:24:01.626116       8 log.go:172] (0xc002c72370) Data frame received for 3
I0131 21:24:01.626180       8 log.go:172] (0xc001384fa0) (3) Data frame handling
I0131 21:24:01.626205       8 log.go:172] (0xc001384fa0) (3) Data frame sent
I0131 21:24:01.687789       8 log.go:172] (0xc002c72370) (0xc001384fa0) Stream removed, broadcasting: 3
I0131 21:24:01.687893       8 log.go:172] (0xc002c72370) Data frame received for 1
I0131 21:24:01.687905       8 log.go:172] (0xc000545c20) (1) Data frame handling
I0131 21:24:01.687916       8 log.go:172] (0xc000545c20) (1) Data frame sent
I0131 21:24:01.687926       8 log.go:172] (0xc002c72370) (0xc000545c20) Stream removed, broadcasting: 1
I0131 21:24:01.688031       8 log.go:172] (0xc002c72370) (0xc000545d60) Stream removed, broadcasting: 5
I0131 21:24:01.688116       8 log.go:172] (0xc002c72370) (0xc000545c20) Stream removed, broadcasting: 1
I0131 21:24:01.688151       8 log.go:172] (0xc002c72370) Go away received
I0131 21:24:01.688177       8 log.go:172] (0xc002c72370) (0xc001384fa0) Stream removed, broadcasting: 3
I0131 21:24:01.688197       8 log.go:172] (0xc002c72370) (0xc000545d60) Stream removed, broadcasting: 5
Jan 31 21:24:01.688: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:24:01.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9148" for this suite.

• [SLOW TEST:38.623 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":662,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services are included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:24:01.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if Kubernetes master services are included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Jan 31 21:24:01.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 31 21:24:01.917: INFO: stderr: ""
Jan 31 21:24:01.918: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:24:01.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1570" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":42,"skipped":684,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:24:01.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-42f73e72-4507-4a38-a659-10abce78ae58
STEP: Creating a pod to test consume configMaps
Jan 31 21:24:02.075: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-44d7f22f-02ed-4076-bbca-430841b01e6a" in namespace "projected-2468" to be "success or failure"
Jan 31 21:24:02.096: INFO: Pod "pod-projected-configmaps-44d7f22f-02ed-4076-bbca-430841b01e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.233596ms
Jan 31 21:24:04.225: INFO: Pod "pod-projected-configmaps-44d7f22f-02ed-4076-bbca-430841b01e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149507272s
Jan 31 21:24:06.234: INFO: Pod "pod-projected-configmaps-44d7f22f-02ed-4076-bbca-430841b01e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158489424s
Jan 31 21:24:08.267: INFO: Pod "pod-projected-configmaps-44d7f22f-02ed-4076-bbca-430841b01e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.191402917s
Jan 31 21:24:10.688: INFO: Pod "pod-projected-configmaps-44d7f22f-02ed-4076-bbca-430841b01e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.612840821s
Jan 31 21:24:12.698: INFO: Pod "pod-projected-configmaps-44d7f22f-02ed-4076-bbca-430841b01e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.622316179s
Jan 31 21:24:14.708: INFO: Pod "pod-projected-configmaps-44d7f22f-02ed-4076-bbca-430841b01e6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.632505373s
STEP: Saw pod success
Jan 31 21:24:14.708: INFO: Pod "pod-projected-configmaps-44d7f22f-02ed-4076-bbca-430841b01e6a" satisfied condition "success or failure"
Jan 31 21:24:14.714: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-44d7f22f-02ed-4076-bbca-430841b01e6a container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 21:24:14.911: INFO: Waiting for pod pod-projected-configmaps-44d7f22f-02ed-4076-bbca-430841b01e6a to disappear
Jan 31 21:24:14.917: INFO: Pod pod-projected-configmaps-44d7f22f-02ed-4076-bbca-430841b01e6a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:24:14.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2468" for this suite.

• [SLOW TEST:13.006 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":686,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:24:14.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 31 21:24:15.621: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 31 21:24:17.637: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102655, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102655, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102655, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102655, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:24:19.644: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102655, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102655, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102655, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102655, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:24:21.642: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102655, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102655, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102655, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102655, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 21:24:24.693: INFO: Waiting for the number of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:24:24.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:24:26.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8441" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.415 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":44,"skipped":698,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:24:26.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 31 21:24:26.448: INFO: Waiting up to 5m0s for pod "pod-c6305fda-2ae3-457a-b04e-2305a2f8876f" in namespace "emptydir-9599" to be "success or failure"
Jan 31 21:24:26.491: INFO: Pod "pod-c6305fda-2ae3-457a-b04e-2305a2f8876f": Phase="Pending", Reason="", readiness=false. Elapsed: 42.765776ms
Jan 31 21:24:28.501: INFO: Pod "pod-c6305fda-2ae3-457a-b04e-2305a2f8876f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05219843s
Jan 31 21:24:30.518: INFO: Pod "pod-c6305fda-2ae3-457a-b04e-2305a2f8876f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069637408s
Jan 31 21:24:32.526: INFO: Pod "pod-c6305fda-2ae3-457a-b04e-2305a2f8876f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078068816s
Jan 31 21:24:34.533: INFO: Pod "pod-c6305fda-2ae3-457a-b04e-2305a2f8876f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085033794s
Jan 31 21:24:36.546: INFO: Pod "pod-c6305fda-2ae3-457a-b04e-2305a2f8876f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098151383s
STEP: Saw pod success
Jan 31 21:24:36.547: INFO: Pod "pod-c6305fda-2ae3-457a-b04e-2305a2f8876f" satisfied condition "success or failure"
Jan 31 21:24:36.550: INFO: Trying to get logs from node jerma-node pod pod-c6305fda-2ae3-457a-b04e-2305a2f8876f container test-container: 
STEP: delete the pod
Jan 31 21:24:36.598: INFO: Waiting for pod pod-c6305fda-2ae3-457a-b04e-2305a2f8876f to disappear
Jan 31 21:24:36.602: INFO: Pod pod-c6305fda-2ae3-457a-b04e-2305a2f8876f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:24:36.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9599" for this suite.

• [SLOW TEST:10.278 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":699,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:24:36.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 21:24:36.770: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4fbca304-8a43-4a85-9569-f9d079344f14" in namespace "downward-api-7363" to be "success or failure"
Jan 31 21:24:36.787: INFO: Pod "downwardapi-volume-4fbca304-8a43-4a85-9569-f9d079344f14": Phase="Pending", Reason="", readiness=false. Elapsed: 16.933213ms
Jan 31 21:24:38.793: INFO: Pod "downwardapi-volume-4fbca304-8a43-4a85-9569-f9d079344f14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023385617s
Jan 31 21:24:40.802: INFO: Pod "downwardapi-volume-4fbca304-8a43-4a85-9569-f9d079344f14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03162437s
Jan 31 21:24:42.806: INFO: Pod "downwardapi-volume-4fbca304-8a43-4a85-9569-f9d079344f14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036210738s
Jan 31 21:24:44.815: INFO: Pod "downwardapi-volume-4fbca304-8a43-4a85-9569-f9d079344f14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044610764s
STEP: Saw pod success
Jan 31 21:24:44.815: INFO: Pod "downwardapi-volume-4fbca304-8a43-4a85-9569-f9d079344f14" satisfied condition "success or failure"
Jan 31 21:24:44.820: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-4fbca304-8a43-4a85-9569-f9d079344f14 container client-container: 
STEP: delete the pod
Jan 31 21:24:44.862: INFO: Waiting for pod downwardapi-volume-4fbca304-8a43-4a85-9569-f9d079344f14 to disappear
Jan 31 21:24:44.867: INFO: Pod downwardapi-volume-4fbca304-8a43-4a85-9569-f9d079344f14 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:24:44.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7363" for this suite.

• [SLOW TEST:8.250 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":730,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:24:44.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Jan 31 21:24:44.999: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:24:45.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6401" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":47,"skipped":738,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:24:45.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-d4549d5d-d0a5-4f9e-91bf-229c94fd92e9
STEP: Creating a pod to test consume configMaps
Jan 31 21:24:45.320: INFO: Waiting up to 5m0s for pod "pod-configmaps-95352d71-034b-4335-b424-9eb5f1d2dcb3" in namespace "configmap-6173" to be "success or failure"
Jan 31 21:24:45.364: INFO: Pod "pod-configmaps-95352d71-034b-4335-b424-9eb5f1d2dcb3": Phase="Pending", Reason="", readiness=false. Elapsed: 43.880595ms
Jan 31 21:24:47.377: INFO: Pod "pod-configmaps-95352d71-034b-4335-b424-9eb5f1d2dcb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056098142s
Jan 31 21:24:49.383: INFO: Pod "pod-configmaps-95352d71-034b-4335-b424-9eb5f1d2dcb3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062235849s
Jan 31 21:24:51.389: INFO: Pod "pod-configmaps-95352d71-034b-4335-b424-9eb5f1d2dcb3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068763153s
Jan 31 21:24:53.396: INFO: Pod "pod-configmaps-95352d71-034b-4335-b424-9eb5f1d2dcb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07575433s
STEP: Saw pod success
Jan 31 21:24:53.396: INFO: Pod "pod-configmaps-95352d71-034b-4335-b424-9eb5f1d2dcb3" satisfied condition "success or failure"
Jan 31 21:24:53.401: INFO: Trying to get logs from node jerma-node pod pod-configmaps-95352d71-034b-4335-b424-9eb5f1d2dcb3 container configmap-volume-test: 
STEP: delete the pod
Jan 31 21:24:53.455: INFO: Waiting for pod pod-configmaps-95352d71-034b-4335-b424-9eb5f1d2dcb3 to disappear
Jan 31 21:24:53.461: INFO: Pod pod-configmaps-95352d71-034b-4335-b424-9eb5f1d2dcb3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:24:53.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6173" for this suite.

• [SLOW TEST:8.315 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":755,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:24:53.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 21:24:54.510: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 21:24:56.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102694, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102694, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:24:58.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102694, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102694, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:25:00.554: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102694, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716102694, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 21:25:03.643: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:25:03.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7655" for this suite.
STEP: Destroying namespace "webhook-7655-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.383 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":49,"skipped":784,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:25:04.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-8489
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 21:25:05.133: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-8489" in namespace "subpath-153" to be "success or failure"
Jan 31 21:25:05.195: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Pending", Reason="", readiness=false. Elapsed: 62.071886ms
Jan 31 21:25:07.202: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068770249s
Jan 31 21:25:09.211: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07814838s
Jan 31 21:25:11.217: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083994915s
Jan 31 21:25:13.224: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Running", Reason="", readiness=true. Elapsed: 8.090582415s
Jan 31 21:25:15.230: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Running", Reason="", readiness=true. Elapsed: 10.097084977s
Jan 31 21:25:17.237: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Running", Reason="", readiness=true. Elapsed: 12.10424515s
Jan 31 21:25:19.245: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Running", Reason="", readiness=true. Elapsed: 14.112246521s
Jan 31 21:25:21.252: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Running", Reason="", readiness=true. Elapsed: 16.119042351s
Jan 31 21:25:23.261: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Running", Reason="", readiness=true. Elapsed: 18.128295341s
Jan 31 21:25:25.268: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Running", Reason="", readiness=true. Elapsed: 20.134931655s
Jan 31 21:25:27.275: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Running", Reason="", readiness=true. Elapsed: 22.142391879s
Jan 31 21:25:29.281: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Running", Reason="", readiness=true. Elapsed: 24.148526677s
Jan 31 21:25:31.292: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Running", Reason="", readiness=true. Elapsed: 26.159004497s
Jan 31 21:25:33.301: INFO: Pod "pod-subpath-test-downwardapi-8489": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.167560306s
STEP: Saw pod success
Jan 31 21:25:33.301: INFO: Pod "pod-subpath-test-downwardapi-8489" satisfied condition "success or failure"
Jan 31 21:25:33.308: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-8489 container test-container-subpath-downwardapi-8489: 
STEP: delete the pod
Jan 31 21:25:33.375: INFO: Waiting for pod pod-subpath-test-downwardapi-8489 to disappear
Jan 31 21:25:33.388: INFO: Pod pod-subpath-test-downwardapi-8489 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-8489
Jan 31 21:25:33.388: INFO: Deleting pod "pod-subpath-test-downwardapi-8489" in namespace "subpath-153"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:25:33.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-153" for this suite.

• [SLOW TEST:28.589 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":50,"skipped":810,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:25:33.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 21:25:33.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5450'
Jan 31 21:25:33.649: INFO: stderr: ""
Jan 31 21:25:33.649: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846
Jan 31 21:25:33.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5450'
Jan 31 21:25:40.723: INFO: stderr: ""
Jan 31 21:25:40.723: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:25:40.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5450" for this suite.

• [SLOW TEST:7.286 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":51,"skipped":833,"failed":0}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:25:40.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:25:52.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1655" for this suite.

• [SLOW TEST:11.294 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":52,"skipped":833,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:25:52.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 31 21:26:08.246: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 21:26:08.250: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 21:26:10.251: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 21:26:10.258: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 21:26:12.251: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 21:26:12.256: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 21:26:14.251: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 21:26:14.260: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 21:26:16.251: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 21:26:16.259: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 21:26:18.251: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 21:26:18.263: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 21:26:20.251: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 21:26:20.260: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 21:26:22.251: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 21:26:22.255: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 21:26:24.251: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 21:26:24.260: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:26:24.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6272" for this suite.

• [SLOW TEST:32.258 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":852,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:26:24.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 31 21:26:31.227: INFO: 0 pods remaining
Jan 31 21:26:31.227: INFO: 0 pods have nil DeletionTimestamp
Jan 31 21:26:31.227: INFO: 
STEP: Gathering metrics
W0131 21:26:31.620609       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 21:26:31.620: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:26:31.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4802" for this suite.

• [SLOW TEST:7.727 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":54,"skipped":867,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is a conflict between pods with the same hostPort and protocol when one uses the 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:26:32.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 31 21:26:32.508: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 31 21:26:32.778: INFO: Waiting for terminating namespaces to be deleted...
Jan 31 21:26:32.853: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 31 21:26:32.877: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 31 21:26:32.877: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 21:26:32.877: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 31 21:26:32.877: INFO: 	Container weave ready: true, restart count 1
Jan 31 21:26:32.877: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 21:26:32.877: INFO: pod-handle-http-request from container-lifecycle-hook-6272 started at 2020-01-31 21:25:52 +0000 UTC (1 container status recorded)
Jan 31 21:26:32.877: INFO: 	Container pod-handle-http-request ready: true, restart count 0
Jan 31 21:26:32.877: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 31 21:26:32.932: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 31 21:26:32.933: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 21:26:32.933: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 31 21:26:32.933: INFO: 	Container weave ready: true, restart count 0
Jan 31 21:26:32.933: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 21:26:32.933: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 31 21:26:32.933: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 31 21:26:32.933: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 31 21:26:32.933: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 31 21:26:32.933: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 31 21:26:32.933: INFO: 	Container etcd ready: true, restart count 1
Jan 31 21:26:32.933: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 31 21:26:32.933: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 31 21:26:32.933: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 31 21:26:32.933: INFO: 	Container coredns ready: true, restart count 0
Jan 31 21:26:32.933: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 31 21:26:32.933: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there is a conflict between pods with the same hostPort and protocol when one uses the 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7db795d3-5dcf-4d89-9d20-e108b59267a9 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-7db795d3-5dcf-4d89-9d20-e108b59267a9 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-7db795d3-5dcf-4d89-9d20-e108b59267a9
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:31:55.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6887" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:323.893 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is a conflict between pods with the same hostPort and protocol when one uses the 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":55,"skipped":889,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:31:55.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:32:03.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4760" for this suite.

• [SLOW TEST:7.142 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":56,"skipped":900,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check if all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:32:03.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:32:03.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 31 21:32:03.422: INFO: stderr: ""
Jan 31 21:32:03.422: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:32:03.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-605" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":57,"skipped":931,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:32:03.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-6aec8962-7c48-4f3f-a550-56c2dc046715
STEP: Creating a pod to test consume configMaps
Jan 31 21:32:03.735: INFO: Waiting up to 5m0s for pod "pod-configmaps-cc350921-426b-4dba-bd2c-a5619f75b2cb" in namespace "configmap-9948" to be "success or failure"
Jan 31 21:32:03.878: INFO: Pod "pod-configmaps-cc350921-426b-4dba-bd2c-a5619f75b2cb": Phase="Pending", Reason="", readiness=false. Elapsed: 142.385108ms
Jan 31 21:32:05.886: INFO: Pod "pod-configmaps-cc350921-426b-4dba-bd2c-a5619f75b2cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1505078s
Jan 31 21:32:07.903: INFO: Pod "pod-configmaps-cc350921-426b-4dba-bd2c-a5619f75b2cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167875632s
Jan 31 21:32:09.912: INFO: Pod "pod-configmaps-cc350921-426b-4dba-bd2c-a5619f75b2cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177002834s
Jan 31 21:32:11.920: INFO: Pod "pod-configmaps-cc350921-426b-4dba-bd2c-a5619f75b2cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.184312186s
STEP: Saw pod success
Jan 31 21:32:11.920: INFO: Pod "pod-configmaps-cc350921-426b-4dba-bd2c-a5619f75b2cb" satisfied condition "success or failure"
Jan 31 21:32:11.923: INFO: Trying to get logs from node jerma-node pod pod-configmaps-cc350921-426b-4dba-bd2c-a5619f75b2cb container configmap-volume-test: 
STEP: delete the pod
Jan 31 21:32:12.060: INFO: Waiting for pod pod-configmaps-cc350921-426b-4dba-bd2c-a5619f75b2cb to disappear
Jan 31 21:32:12.079: INFO: Pod pod-configmaps-cc350921-426b-4dba-bd2c-a5619f75b2cb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:32:12.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9948" for this suite.

• [SLOW TEST:8.665 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":942,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:32:12.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-3dc7565e-2311-4f03-b99e-90cdb4358ec7
STEP: Creating a pod to test consume configMaps
Jan 31 21:32:12.436: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a3bb49d8-85dc-4ceb-bb5d-6271ab4de0b8" in namespace "projected-9856" to be "success or failure"
Jan 31 21:32:12.487: INFO: Pod "pod-projected-configmaps-a3bb49d8-85dc-4ceb-bb5d-6271ab4de0b8": Phase="Pending", Reason="", readiness=false. Elapsed: 50.917335ms
Jan 31 21:32:14.495: INFO: Pod "pod-projected-configmaps-a3bb49d8-85dc-4ceb-bb5d-6271ab4de0b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058468754s
Jan 31 21:32:16.506: INFO: Pod "pod-projected-configmaps-a3bb49d8-85dc-4ceb-bb5d-6271ab4de0b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069867303s
Jan 31 21:32:18.514: INFO: Pod "pod-projected-configmaps-a3bb49d8-85dc-4ceb-bb5d-6271ab4de0b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077375369s
Jan 31 21:32:20.520: INFO: Pod "pod-projected-configmaps-a3bb49d8-85dc-4ceb-bb5d-6271ab4de0b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08339644s
STEP: Saw pod success
Jan 31 21:32:20.520: INFO: Pod "pod-projected-configmaps-a3bb49d8-85dc-4ceb-bb5d-6271ab4de0b8" satisfied condition "success or failure"
Jan 31 21:32:20.523: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-a3bb49d8-85dc-4ceb-bb5d-6271ab4de0b8 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 21:32:20.566: INFO: Waiting for pod pod-projected-configmaps-a3bb49d8-85dc-4ceb-bb5d-6271ab4de0b8 to disappear
Jan 31 21:32:20.574: INFO: Pod pod-projected-configmaps-a3bb49d8-85dc-4ceb-bb5d-6271ab4de0b8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:32:20.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9856" for this suite.

• [SLOW TEST:8.481 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":963,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:32:20.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3227; will wait for the garbage collector to delete the pods
Jan 31 21:32:30.769: INFO: Deleting Job.batch foo took: 14.94341ms
Jan 31 21:32:31.069: INFO: Terminating Job.batch foo pods took: 300.553996ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:33:12.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3227" for this suite.

• [SLOW TEST:51.955 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":60,"skipped":1002,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:33:12.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0131 21:33:24.550246       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 21:33:24.550: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:33:24.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8855" for this suite.

• [SLOW TEST:12.024 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":61,"skipped":1012,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:33:24.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8997
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node on which to schedule the stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8997
STEP: Creating statefulset with conflicting port in namespace statefulset-8997
STEP: Waiting until pod test-pod starts running in namespace statefulset-8997
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-8997
Jan 31 21:33:34.805: INFO: Observed stateful pod in namespace: statefulset-8997, name: ss-0, uid: 24076a17-6022-4bc9-9d1b-715b90fd4826, status phase: Pending. Waiting for statefulset controller to delete.
Jan 31 21:33:43.069: INFO: Observed stateful pod in namespace: statefulset-8997, name: ss-0, uid: 24076a17-6022-4bc9-9d1b-715b90fd4826, status phase: Failed. Waiting for statefulset controller to delete.
Jan 31 21:33:43.116: INFO: Observed stateful pod in namespace: statefulset-8997, name: ss-0, uid: 24076a17-6022-4bc9-9d1b-715b90fd4826, status phase: Failed. Waiting for statefulset controller to delete.
Jan 31 21:33:43.173: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8997
STEP: Removing pod with conflicting port in namespace statefulset-8997
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8997 and enters the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 31 21:33:53.575: INFO: Deleting all statefulset in ns statefulset-8997
Jan 31 21:33:53.581: INFO: Scaling statefulset ss to 0
Jan 31 21:34:03.615: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 21:34:03.623: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:34:03.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8997" for this suite.

• [SLOW TEST:39.159 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":62,"skipped":1023,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:34:03.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 31 21:34:04.038: INFO: Number of nodes with available pods: 0
Jan 31 21:34:04.038: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:05.351: INFO: Number of nodes with available pods: 0
Jan 31 21:34:05.351: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:06.498: INFO: Number of nodes with available pods: 0
Jan 31 21:34:06.498: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:07.052: INFO: Number of nodes with available pods: 0
Jan 31 21:34:07.053: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:08.049: INFO: Number of nodes with available pods: 0
Jan 31 21:34:08.049: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:09.210: INFO: Number of nodes with available pods: 0
Jan 31 21:34:09.210: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:11.088: INFO: Number of nodes with available pods: 0
Jan 31 21:34:11.088: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:12.095: INFO: Number of nodes with available pods: 0
Jan 31 21:34:12.095: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:13.064: INFO: Number of nodes with available pods: 1
Jan 31 21:34:13.064: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 21:34:14.061: INFO: Number of nodes with available pods: 2
Jan 31 21:34:14.061: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 31 21:34:14.110: INFO: Number of nodes with available pods: 1
Jan 31 21:34:14.110: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:15.117: INFO: Number of nodes with available pods: 1
Jan 31 21:34:15.117: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:16.122: INFO: Number of nodes with available pods: 1
Jan 31 21:34:16.122: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:17.123: INFO: Number of nodes with available pods: 1
Jan 31 21:34:17.123: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:18.121: INFO: Number of nodes with available pods: 1
Jan 31 21:34:18.121: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:19.122: INFO: Number of nodes with available pods: 1
Jan 31 21:34:19.122: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:20.120: INFO: Number of nodes with available pods: 1
Jan 31 21:34:20.120: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:21.119: INFO: Number of nodes with available pods: 1
Jan 31 21:34:21.119: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:22.271: INFO: Number of nodes with available pods: 1
Jan 31 21:34:22.271: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:23.127: INFO: Number of nodes with available pods: 1
Jan 31 21:34:23.127: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:24.122: INFO: Number of nodes with available pods: 1
Jan 31 21:34:24.122: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:25.126: INFO: Number of nodes with available pods: 1
Jan 31 21:34:25.126: INFO: Node jerma-node is running more than one daemon pod
Jan 31 21:34:26.129: INFO: Number of nodes with available pods: 2
Jan 31 21:34:26.129: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9655; will wait for the garbage collector to delete the pods
Jan 31 21:34:26.208: INFO: Deleting DaemonSet.extensions daemon-set took: 18.291647ms
Jan 31 21:34:26.309: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.692091ms
Jan 31 21:34:42.429: INFO: Number of nodes with available pods: 0
Jan 31 21:34:42.429: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 21:34:42.434: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9655/daemonsets","resourceVersion":"5598962"},"items":null}

Jan 31 21:34:42.439: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9655/pods","resourceVersion":"5598962"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:34:42.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9655" for this suite.

• [SLOW TEST:38.734 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":63,"skipped":1029,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:34:42.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 21:34:43.252: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 21:34:45.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:34:47.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:34:49.279: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:34:51.279: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103283, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 21:34:54.313: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:34:54.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9596" for this suite.
STEP: Destroying namespace "webhook-9596-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.180 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":64,"skipped":1053,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:34:54.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 21:34:55.432: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 21:34:57.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:34:59.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:35:01.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:35:03.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103295, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 21:35:06.495: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:35:06.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6656" for this suite.
STEP: Destroying namespace "webhook-6656-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.161 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":65,"skipped":1060,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:35:06.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-963
STEP: Creating an active service to test reachability when its FQDN is referenced as the externalName of another service
STEP: creating service externalsvc in namespace services-963
STEP: creating replication controller externalsvc in namespace services-963
I0131 21:35:07.155724       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-963, replica count: 2
I0131 21:35:10.207252       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:35:13.207604       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:35:16.207991       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:35:19.208386       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Jan 31 21:35:19.255: INFO: Creating new exec pod
Jan 31 21:35:25.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-963 execpod57rnf -- /bin/sh -x -c nslookup clusterip-service'
Jan 31 21:35:27.844: INFO: stderr: "I0131 21:35:27.639259     379 log.go:172] (0xc0004b6000) (0xc0005ea640) Create stream\nI0131 21:35:27.639449     379 log.go:172] (0xc0004b6000) (0xc0005ea640) Stream added, broadcasting: 1\nI0131 21:35:27.644701     379 log.go:172] (0xc0004b6000) Reply frame received for 1\nI0131 21:35:27.644770     379 log.go:172] (0xc0004b6000) (0xc0006c7d60) Create stream\nI0131 21:35:27.644781     379 log.go:172] (0xc0004b6000) (0xc0006c7d60) Stream added, broadcasting: 3\nI0131 21:35:27.647064     379 log.go:172] (0xc0004b6000) Reply frame received for 3\nI0131 21:35:27.647121     379 log.go:172] (0xc0004b6000) (0xc000449400) Create stream\nI0131 21:35:27.647140     379 log.go:172] (0xc0004b6000) (0xc000449400) Stream added, broadcasting: 5\nI0131 21:35:27.649817     379 log.go:172] (0xc0004b6000) Reply frame received for 5\nI0131 21:35:27.744650     379 log.go:172] (0xc0004b6000) Data frame received for 5\nI0131 21:35:27.744757     379 log.go:172] (0xc000449400) (5) Data frame handling\nI0131 21:35:27.744780     379 log.go:172] (0xc000449400) (5) Data frame sent\n+ nslookup clusterip-service\nI0131 21:35:27.760371     379 log.go:172] (0xc0004b6000) Data frame received for 3\nI0131 21:35:27.760417     379 log.go:172] (0xc0006c7d60) (3) Data frame handling\nI0131 21:35:27.760434     379 log.go:172] (0xc0006c7d60) (3) Data frame sent\nI0131 21:35:27.762031     379 log.go:172] (0xc0004b6000) Data frame received for 3\nI0131 21:35:27.762076     379 log.go:172] (0xc0006c7d60) (3) Data frame handling\nI0131 21:35:27.762093     379 log.go:172] (0xc0006c7d60) (3) Data frame sent\nI0131 21:35:27.836805     379 log.go:172] (0xc0004b6000) (0xc000449400) Stream removed, broadcasting: 5\nI0131 21:35:27.837040     379 log.go:172] (0xc0004b6000) Data frame received for 1\nI0131 21:35:27.837070     379 log.go:172] (0xc0005ea640) (1) Data frame handling\nI0131 21:35:27.837084     379 log.go:172] (0xc0005ea640) (1) Data frame sent\nI0131 21:35:27.837112     379 log.go:172] (0xc0004b6000) (0xc0005ea640) Stream removed, broadcasting: 1\nI0131 21:35:27.837871     379 log.go:172] (0xc0004b6000) (0xc0006c7d60) Stream removed, broadcasting: 3\nI0131 21:35:27.837979     379 log.go:172] (0xc0004b6000) Go away received\nI0131 21:35:27.838100     379 log.go:172] (0xc0004b6000) (0xc0005ea640) Stream removed, broadcasting: 1\nI0131 21:35:27.838117     379 log.go:172] (0xc0004b6000) (0xc0006c7d60) Stream removed, broadcasting: 3\nI0131 21:35:27.838122     379 log.go:172] (0xc0004b6000) (0xc000449400) Stream removed, broadcasting: 5\n"
Jan 31 21:35:27.844: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-963.svc.cluster.local\tcanonical name = externalsvc.services-963.svc.cluster.local.\nName:\texternalsvc.services-963.svc.cluster.local\nAddress: 10.96.220.66\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-963; will wait for the garbage collector to delete the pods
Jan 31 21:35:27.921: INFO: Deleting ReplicationController externalsvc took: 18.843721ms
Jan 31 21:35:28.021: INFO: Terminating ReplicationController externalsvc pods took: 100.40416ms
Jan 31 21:35:43.322: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:35:43.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-963" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:36.599 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":66,"skipped":1098,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:35:43.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-3228c313-c46b-4149-a39f-79556cb1ee91
STEP: Creating a pod to test consume secrets
Jan 31 21:35:43.552: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d138ac6d-a7d4-4fb4-bfd3-1da091de9a23" in namespace "projected-1599" to be "success or failure"
Jan 31 21:35:43.559: INFO: Pod "pod-projected-secrets-d138ac6d-a7d4-4fb4-bfd3-1da091de9a23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.851196ms
Jan 31 21:35:45.664: INFO: Pod "pod-projected-secrets-d138ac6d-a7d4-4fb4-bfd3-1da091de9a23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111205504s
Jan 31 21:35:47.675: INFO: Pod "pod-projected-secrets-d138ac6d-a7d4-4fb4-bfd3-1da091de9a23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122278635s
Jan 31 21:35:49.681: INFO: Pod "pod-projected-secrets-d138ac6d-a7d4-4fb4-bfd3-1da091de9a23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128910891s
Jan 31 21:35:51.688: INFO: Pod "pod-projected-secrets-d138ac6d-a7d4-4fb4-bfd3-1da091de9a23": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135432713s
Jan 31 21:35:53.699: INFO: Pod "pod-projected-secrets-d138ac6d-a7d4-4fb4-bfd3-1da091de9a23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.146804653s
STEP: Saw pod success
Jan 31 21:35:53.699: INFO: Pod "pod-projected-secrets-d138ac6d-a7d4-4fb4-bfd3-1da091de9a23" satisfied condition "success or failure"
Jan 31 21:35:53.716: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-d138ac6d-a7d4-4fb4-bfd3-1da091de9a23 container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 21:35:53.795: INFO: Waiting for pod pod-projected-secrets-d138ac6d-a7d4-4fb4-bfd3-1da091de9a23 to disappear
Jan 31 21:35:53.832: INFO: Pod pod-projected-secrets-d138ac6d-a7d4-4fb4-bfd3-1da091de9a23 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:35:53.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1599" for this suite.

• [SLOW TEST:10.449 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1104,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:35:53.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-7c168822-cc26-4cc4-9141-640b86ce3149
STEP: Creating a pod to test consume configMaps
Jan 31 21:35:53.996: INFO: Waiting up to 5m0s for pod "pod-configmaps-2818201f-aa5c-4624-b574-f085829bf14c" in namespace "configmap-4080" to be "success or failure"
Jan 31 21:35:54.007: INFO: Pod "pod-configmaps-2818201f-aa5c-4624-b574-f085829bf14c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.315788ms
Jan 31 21:35:56.014: INFO: Pod "pod-configmaps-2818201f-aa5c-4624-b574-f085829bf14c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018445274s
Jan 31 21:35:58.023: INFO: Pod "pod-configmaps-2818201f-aa5c-4624-b574-f085829bf14c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027051774s
Jan 31 21:36:00.038: INFO: Pod "pod-configmaps-2818201f-aa5c-4624-b574-f085829bf14c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041694117s
STEP: Saw pod success
Jan 31 21:36:00.038: INFO: Pod "pod-configmaps-2818201f-aa5c-4624-b574-f085829bf14c" satisfied condition "success or failure"
Jan 31 21:36:00.045: INFO: Trying to get logs from node jerma-node pod pod-configmaps-2818201f-aa5c-4624-b574-f085829bf14c container configmap-volume-test: 
STEP: delete the pod
Jan 31 21:36:00.096: INFO: Waiting for pod pod-configmaps-2818201f-aa5c-4624-b574-f085829bf14c to disappear
Jan 31 21:36:00.100: INFO: Pod pod-configmaps-2818201f-aa5c-4624-b574-f085829bf14c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:36:00.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4080" for this suite.

• [SLOW TEST:6.282 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1118,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:36:00.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:36:00.263: INFO: Creating ReplicaSet my-hostname-basic-b6bfc9d2-a7ac-4651-9397-0f746095f44a
Jan 31 21:36:00.334: INFO: Pod name my-hostname-basic-b6bfc9d2-a7ac-4651-9397-0f746095f44a: Found 0 pods out of 1
Jan 31 21:36:05.353: INFO: Pod name my-hostname-basic-b6bfc9d2-a7ac-4651-9397-0f746095f44a: Found 1 pods out of 1
Jan 31 21:36:05.353: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b6bfc9d2-a7ac-4651-9397-0f746095f44a" is running
Jan 31 21:36:09.419: INFO: Pod "my-hostname-basic-b6bfc9d2-a7ac-4651-9397-0f746095f44a-llk8n" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 21:36:00 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 21:36:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b6bfc9d2-a7ac-4651-9397-0f746095f44a]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 21:36:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b6bfc9d2-a7ac-4651-9397-0f746095f44a]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 21:36:00 +0000 UTC Reason: Message:}])
Jan 31 21:36:09.419: INFO: Trying to dial the pod
Jan 31 21:36:14.444: INFO: Controller my-hostname-basic-b6bfc9d2-a7ac-4651-9397-0f746095f44a: Got expected result from replica 1 [my-hostname-basic-b6bfc9d2-a7ac-4651-9397-0f746095f44a-llk8n]: "my-hostname-basic-b6bfc9d2-a7ac-4651-9397-0f746095f44a-llk8n", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:36:14.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1706" for this suite.

• [SLOW TEST:14.318 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":69,"skipped":1156,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:36:14.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:36:14.594: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:36:15.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1110" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":70,"skipped":1164,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:36:15.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1121
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-1121
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1121
Jan 31 21:36:16.262: INFO: Found 0 stateful pods, waiting for 1
Jan 31 21:36:26.268: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 31 21:36:26.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1121 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 21:36:26.661: INFO: stderr: "I0131 21:36:26.421802     398 log.go:172] (0xc0000f4580) (0xc0006e50e0) Create stream\nI0131 21:36:26.421909     398 log.go:172] (0xc0000f4580) (0xc0006e50e0) Stream added, broadcasting: 1\nI0131 21:36:26.424785     398 log.go:172] (0xc0000f4580) Reply frame received for 1\nI0131 21:36:26.424842     398 log.go:172] (0xc0000f4580) (0xc00070edc0) Create stream\nI0131 21:36:26.424857     398 log.go:172] (0xc0000f4580) (0xc00070edc0) Stream added, broadcasting: 3\nI0131 21:36:26.427255     398 log.go:172] (0xc0000f4580) Reply frame received for 3\nI0131 21:36:26.427331     398 log.go:172] (0xc0000f4580) (0xc0006fc0a0) Create stream\nI0131 21:36:26.427353     398 log.go:172] (0xc0000f4580) (0xc0006fc0a0) Stream added, broadcasting: 5\nI0131 21:36:26.429668     398 log.go:172] (0xc0000f4580) Reply frame received for 5\nI0131 21:36:26.513652     398 log.go:172] (0xc0000f4580) Data frame received for 5\nI0131 21:36:26.513724     398 log.go:172] (0xc0006fc0a0) (5) Data frame handling\nI0131 21:36:26.513742     398 log.go:172] (0xc0006fc0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 21:36:26.560049     398 log.go:172] (0xc0000f4580) Data frame received for 3\nI0131 21:36:26.560127     398 log.go:172] (0xc00070edc0) (3) Data frame handling\nI0131 21:36:26.560167     398 log.go:172] (0xc00070edc0) (3) Data frame sent\nI0131 21:36:26.645109     398 log.go:172] (0xc0000f4580) (0xc00070edc0) Stream removed, broadcasting: 3\nI0131 21:36:26.645379     398 log.go:172] (0xc0000f4580) Data frame received for 1\nI0131 21:36:26.645404     398 log.go:172] (0xc0006e50e0) (1) Data frame handling\nI0131 21:36:26.645430     398 log.go:172] (0xc0006e50e0) (1) Data frame sent\nI0131 21:36:26.645454     398 log.go:172] (0xc0000f4580) (0xc0006e50e0) Stream removed, broadcasting: 1\nI0131 21:36:26.645554     398 log.go:172] (0xc0000f4580) (0xc0006fc0a0) Stream removed, broadcasting: 5\nI0131 21:36:26.645615     398 log.go:172] (0xc0000f4580) Go away received\nI0131 21:36:26.646222     398 log.go:172] (0xc0000f4580) (0xc0006e50e0) Stream removed, broadcasting: 1\nI0131 21:36:26.646233     398 log.go:172] (0xc0000f4580) (0xc00070edc0) Stream removed, broadcasting: 3\nI0131 21:36:26.646237     398 log.go:172] (0xc0000f4580) (0xc0006fc0a0) Stream removed, broadcasting: 5\n"
Jan 31 21:36:26.661: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 21:36:26.661: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 21:36:26.669: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 31 21:36:36.675: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
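This is the trick the test uses to make a pod unhealthy without killing it: the ss pods run httpd, and the readiness probe (evidently an HTTP check against the docroot, given the paths involved) starts failing once index.html is moved aside, so the pod stays Running but flips to Ready=false. By hand, against this run's namespace:

  # break readiness on ss-0:
  kubectl exec -n statefulset-1121 ss-0 -- sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
  # ...and restore it later:
  kubectl exec -n statefulset-1121 ss-0 -- sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'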
Jan 31 21:36:36.675: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 21:36:36.700: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 31 21:36:36.700: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:16 +0000 UTC  }]
Jan 31 21:36:36.700: INFO: 
Jan 31 21:36:36.700: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 31 21:36:37.711: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989888818s
Jan 31 21:36:38.740: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979160465s
Jan 31 21:36:39.747: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.950301668s
Jan 31 21:36:40.759: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.943462004s
Jan 31 21:36:42.582: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.930979477s
Jan 31 21:36:43.743: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.107995124s
Jan 31 21:36:44.818: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.946605009s
Jan 31 21:36:45.828: INFO: Verifying statefulset ss doesn't scale past 3 for another 872.083768ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1121
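The scale itself is issued through the API (no kubectl scale appears in the log); the exec commands that follow merely put index.html back so the pods can turn Ready. A manual equivalent:

  kubectl scale statefulset ss --replicas=3 -n statefulset-1121
  kubectl get pods -n statefulset-1121 -w   # ss-1 and ss-2 appear immediately

Because this is a burst-scaling test, the set presumably uses podManagementPolicy: Parallel, so ss-1 and ss-2 are created while ss-0 is still NotReady; with the default OrderedReady policy the scale-up would have stalled on ss-0.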
Jan 31 21:36:46.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1121 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 21:36:47.325: INFO: stderr: "I0131 21:36:47.112773     415 log.go:172] (0xc0000f5760) (0xc00060bcc0) Create stream\nI0131 21:36:47.113861     415 log.go:172] (0xc0000f5760) (0xc00060bcc0) Stream added, broadcasting: 1\nI0131 21:36:47.118158     415 log.go:172] (0xc0000f5760) Reply frame received for 1\nI0131 21:36:47.118287     415 log.go:172] (0xc0000f5760) (0xc00060bd60) Create stream\nI0131 21:36:47.118299     415 log.go:172] (0xc0000f5760) (0xc00060bd60) Stream added, broadcasting: 3\nI0131 21:36:47.120128     415 log.go:172] (0xc0000f5760) Reply frame received for 3\nI0131 21:36:47.120155     415 log.go:172] (0xc0000f5760) (0xc0007c4140) Create stream\nI0131 21:36:47.120167     415 log.go:172] (0xc0000f5760) (0xc0007c4140) Stream added, broadcasting: 5\nI0131 21:36:47.121886     415 log.go:172] (0xc0000f5760) Reply frame received for 5\nI0131 21:36:47.213082     415 log.go:172] (0xc0000f5760) Data frame received for 3\nI0131 21:36:47.213191     415 log.go:172] (0xc00060bd60) (3) Data frame handling\nI0131 21:36:47.213201     415 log.go:172] (0xc00060bd60) (3) Data frame sent\nI0131 21:36:47.213235     415 log.go:172] (0xc0000f5760) Data frame received for 5\nI0131 21:36:47.213253     415 log.go:172] (0xc0007c4140) (5) Data frame handling\nI0131 21:36:47.213276     415 log.go:172] (0xc0007c4140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 21:36:47.317638     415 log.go:172] (0xc0000f5760) Data frame received for 1\nI0131 21:36:47.317683     415 log.go:172] (0xc00060bcc0) (1) Data frame handling\nI0131 21:36:47.317694     415 log.go:172] (0xc00060bcc0) (1) Data frame sent\nI0131 21:36:47.317722     415 log.go:172] (0xc0000f5760) (0xc00060bcc0) Stream removed, broadcasting: 1\nI0131 21:36:47.317741     415 log.go:172] (0xc0000f5760) (0xc00060bd60) Stream removed, broadcasting: 3\nI0131 21:36:47.317927     415 log.go:172] (0xc0000f5760) (0xc0007c4140) Stream removed, broadcasting: 5\nI0131 21:36:47.317973     415 log.go:172] (0xc0000f5760) Go away received\nI0131 21:36:47.318435     415 log.go:172] (0xc0000f5760) (0xc00060bcc0) Stream removed, broadcasting: 1\nI0131 21:36:47.318451     415 log.go:172] (0xc0000f5760) (0xc00060bd60) Stream removed, broadcasting: 3\nI0131 21:36:47.318460     415 log.go:172] (0xc0000f5760) (0xc0007c4140) Stream removed, broadcasting: 5\n"
Jan 31 21:36:47.326: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 21:36:47.326: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 21:36:47.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1121 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 21:36:47.689: INFO: stderr: "I0131 21:36:47.491743     430 log.go:172] (0xc0009a4b00) (0xc0006ebea0) Create stream\nI0131 21:36:47.491910     430 log.go:172] (0xc0009a4b00) (0xc0006ebea0) Stream added, broadcasting: 1\nI0131 21:36:47.495165     430 log.go:172] (0xc0009a4b00) Reply frame received for 1\nI0131 21:36:47.495197     430 log.go:172] (0xc0009a4b00) (0xc000522000) Create stream\nI0131 21:36:47.495208     430 log.go:172] (0xc0009a4b00) (0xc000522000) Stream added, broadcasting: 3\nI0131 21:36:47.496041     430 log.go:172] (0xc0009a4b00) Reply frame received for 3\nI0131 21:36:47.496066     430 log.go:172] (0xc0009a4b00) (0xc000522140) Create stream\nI0131 21:36:47.496083     430 log.go:172] (0xc0009a4b00) (0xc000522140) Stream added, broadcasting: 5\nI0131 21:36:47.497808     430 log.go:172] (0xc0009a4b00) Reply frame received for 5\nI0131 21:36:47.595966     430 log.go:172] (0xc0009a4b00) Data frame received for 5\nI0131 21:36:47.596016     430 log.go:172] (0xc000522140) (5) Data frame handling\nI0131 21:36:47.596039     430 log.go:172] (0xc000522140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 21:36:47.596848     430 log.go:172] (0xc0009a4b00) Data frame received for 3\nI0131 21:36:47.596863     430 log.go:172] (0xc000522000) (3) Data frame handling\nI0131 21:36:47.596871     430 log.go:172] (0xc000522000) (3) Data frame sent\nI0131 21:36:47.596890     430 log.go:172] (0xc0009a4b00) Data frame received for 5\nI0131 21:36:47.596896     430 log.go:172] (0xc000522140) (5) Data frame handling\nI0131 21:36:47.596902     430 log.go:172] (0xc000522140) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0131 21:36:47.597076     430 log.go:172] (0xc0009a4b00) Data frame received for 5\nI0131 21:36:47.597100     430 log.go:172] (0xc000522140) (5) Data frame handling\nI0131 21:36:47.597113     430 log.go:172] (0xc000522140) (5) Data frame sent\n+ true\nI0131 21:36:47.679539     430 log.go:172] (0xc0009a4b00) Data frame received for 1\nI0131 21:36:47.679603     430 log.go:172] (0xc0009a4b00) (0xc000522140) Stream removed, broadcasting: 5\nI0131 21:36:47.679671     430 log.go:172] (0xc0006ebea0) (1) Data frame handling\nI0131 21:36:47.679692     430 log.go:172] (0xc0006ebea0) (1) Data frame sent\nI0131 21:36:47.679735     430 log.go:172] (0xc0009a4b00) (0xc000522000) Stream removed, broadcasting: 3\nI0131 21:36:47.679775     430 log.go:172] (0xc0009a4b00) (0xc0006ebea0) Stream removed, broadcasting: 1\nI0131 21:36:47.679812     430 log.go:172] (0xc0009a4b00) Go away received\nI0131 21:36:47.680438     430 log.go:172] (0xc0009a4b00) (0xc0006ebea0) Stream removed, broadcasting: 1\nI0131 21:36:47.680459     430 log.go:172] (0xc0009a4b00) (0xc000522000) Stream removed, broadcasting: 3\nI0131 21:36:47.680469     430 log.go:172] (0xc0009a4b00) (0xc000522140) Stream removed, broadcasting: 5\n"
Jan 31 21:36:47.689: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 21:36:47.689: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 21:36:47.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1121 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 21:36:48.098: INFO: stderr: "I0131 21:36:47.929605     452 log.go:172] (0xc00061b3f0) (0xc000aaa640) Create stream\nI0131 21:36:47.929714     452 log.go:172] (0xc00061b3f0) (0xc000aaa640) Stream added, broadcasting: 1\nI0131 21:36:47.946360     452 log.go:172] (0xc00061b3f0) Reply frame received for 1\nI0131 21:36:47.946493     452 log.go:172] (0xc00061b3f0) (0xc000685d60) Create stream\nI0131 21:36:47.946520     452 log.go:172] (0xc00061b3f0) (0xc000685d60) Stream added, broadcasting: 3\nI0131 21:36:47.949224     452 log.go:172] (0xc00061b3f0) Reply frame received for 3\nI0131 21:36:47.949305     452 log.go:172] (0xc00061b3f0) (0xc000685e00) Create stream\nI0131 21:36:47.949323     452 log.go:172] (0xc00061b3f0) (0xc000685e00) Stream added, broadcasting: 5\nI0131 21:36:47.950773     452 log.go:172] (0xc00061b3f0) Reply frame received for 5\nI0131 21:36:48.016755     452 log.go:172] (0xc00061b3f0) Data frame received for 3\nI0131 21:36:48.016818     452 log.go:172] (0xc00061b3f0) Data frame received for 5\nI0131 21:36:48.016861     452 log.go:172] (0xc000685e00) (5) Data frame handling\nI0131 21:36:48.016881     452 log.go:172] (0xc000685e00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0131 21:36:48.016919     452 log.go:172] (0xc000685d60) (3) Data frame handling\nI0131 21:36:48.016955     452 log.go:172] (0xc000685d60) (3) Data frame sent\nI0131 21:36:48.085332     452 log.go:172] (0xc00061b3f0) Data frame received for 1\nI0131 21:36:48.085587     452 log.go:172] (0xc00061b3f0) (0xc000685d60) Stream removed, broadcasting: 3\nI0131 21:36:48.085634     452 log.go:172] (0xc000aaa640) (1) Data frame handling\nI0131 21:36:48.085657     452 log.go:172] (0xc000aaa640) (1) Data frame sent\nI0131 21:36:48.085690     452 log.go:172] (0xc00061b3f0) (0xc000685e00) Stream removed, broadcasting: 5\nI0131 21:36:48.085718     452 log.go:172] (0xc00061b3f0) (0xc000aaa640) Stream removed, broadcasting: 1\nI0131 21:36:48.085742     452 log.go:172] (0xc00061b3f0) Go away received\nI0131 21:36:48.086891     452 log.go:172] (0xc00061b3f0) (0xc000aaa640) Stream removed, broadcasting: 1\nI0131 21:36:48.086915     452 log.go:172] (0xc00061b3f0) (0xc000685d60) Stream removed, broadcasting: 3\nI0131 21:36:48.086925     452 log.go:172] (0xc00061b3f0) (0xc000685e00) Stream removed, broadcasting: 5\n"
Jan 31 21:36:48.099: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 21:36:48.099: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 21:36:48.109: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 21:36:48.110: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 21:36:48.110: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 31 21:36:48.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1121 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 21:36:48.489: INFO: stderr: "I0131 21:36:48.297358     474 log.go:172] (0xc000a26000) (0xc000638780) Create stream\nI0131 21:36:48.297622     474 log.go:172] (0xc000a26000) (0xc000638780) Stream added, broadcasting: 1\nI0131 21:36:48.303930     474 log.go:172] (0xc000a26000) Reply frame received for 1\nI0131 21:36:48.304104     474 log.go:172] (0xc000a26000) (0xc000791540) Create stream\nI0131 21:36:48.304131     474 log.go:172] (0xc000a26000) (0xc000791540) Stream added, broadcasting: 3\nI0131 21:36:48.306843     474 log.go:172] (0xc000a26000) Reply frame received for 3\nI0131 21:36:48.306973     474 log.go:172] (0xc000a26000) (0xc0009ee000) Create stream\nI0131 21:36:48.307018     474 log.go:172] (0xc000a26000) (0xc0009ee000) Stream added, broadcasting: 5\nI0131 21:36:48.313571     474 log.go:172] (0xc000a26000) Reply frame received for 5\nI0131 21:36:48.393346     474 log.go:172] (0xc000a26000) Data frame received for 5\nI0131 21:36:48.393419     474 log.go:172] (0xc0009ee000) (5) Data frame handling\nI0131 21:36:48.393465     474 log.go:172] (0xc0009ee000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 21:36:48.394173     474 log.go:172] (0xc000a26000) Data frame received for 3\nI0131 21:36:48.394354     474 log.go:172] (0xc000791540) (3) Data frame handling\nI0131 21:36:48.394401     474 log.go:172] (0xc000791540) (3) Data frame sent\nI0131 21:36:48.467131     474 log.go:172] (0xc000a26000) Data frame received for 1\nI0131 21:36:48.467827     474 log.go:172] (0xc000a26000) (0xc000791540) Stream removed, broadcasting: 3\nI0131 21:36:48.468082     474 log.go:172] (0xc000638780) (1) Data frame handling\nI0131 21:36:48.468329     474 log.go:172] (0xc000638780) (1) Data frame sent\nI0131 21:36:48.468534     474 log.go:172] (0xc000a26000) (0xc000638780) Stream removed, broadcasting: 1\nI0131 21:36:48.470309     474 log.go:172] (0xc000a26000) (0xc0009ee000) Stream removed, broadcasting: 5\nI0131 21:36:48.470395     474 log.go:172] (0xc000a26000) Go away received\nI0131 21:36:48.470507     474 log.go:172] (0xc000a26000) (0xc000638780) Stream removed, broadcasting: 1\nI0131 21:36:48.470537     474 log.go:172] (0xc000a26000) (0xc000791540) Stream removed, broadcasting: 3\nI0131 21:36:48.470570     474 log.go:172] (0xc000a26000) (0xc0009ee000) Stream removed, broadcasting: 5\n"
Jan 31 21:36:48.489: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 21:36:48.489: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 21:36:48.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1121 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 21:36:48.844: INFO: stderr: "I0131 21:36:48.654673     496 log.go:172] (0xc0000f5760) (0xc0008aa820) Create stream\nI0131 21:36:48.654922     496 log.go:172] (0xc0000f5760) (0xc0008aa820) Stream added, broadcasting: 1\nI0131 21:36:48.661853     496 log.go:172] (0xc0000f5760) Reply frame received for 1\nI0131 21:36:48.661948     496 log.go:172] (0xc0000f5760) (0xc0006bfc20) Create stream\nI0131 21:36:48.661960     496 log.go:172] (0xc0000f5760) (0xc0006bfc20) Stream added, broadcasting: 3\nI0131 21:36:48.663073     496 log.go:172] (0xc0000f5760) Reply frame received for 3\nI0131 21:36:48.663098     496 log.go:172] (0xc0000f5760) (0xc000632820) Create stream\nI0131 21:36:48.663106     496 log.go:172] (0xc0000f5760) (0xc000632820) Stream added, broadcasting: 5\nI0131 21:36:48.664216     496 log.go:172] (0xc0000f5760) Reply frame received for 5\nI0131 21:36:48.734663     496 log.go:172] (0xc0000f5760) Data frame received for 5\nI0131 21:36:48.734742     496 log.go:172] (0xc000632820) (5) Data frame handling\nI0131 21:36:48.734764     496 log.go:172] (0xc000632820) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 21:36:48.763884     496 log.go:172] (0xc0000f5760) Data frame received for 3\nI0131 21:36:48.763902     496 log.go:172] (0xc0006bfc20) (3) Data frame handling\nI0131 21:36:48.763916     496 log.go:172] (0xc0006bfc20) (3) Data frame sent\nI0131 21:36:48.831507     496 log.go:172] (0xc0000f5760) Data frame received for 1\nI0131 21:36:48.831953     496 log.go:172] (0xc0008aa820) (1) Data frame handling\nI0131 21:36:48.831978     496 log.go:172] (0xc0008aa820) (1) Data frame sent\nI0131 21:36:48.832008     496 log.go:172] (0xc0000f5760) (0xc0008aa820) Stream removed, broadcasting: 1\nI0131 21:36:48.832469     496 log.go:172] (0xc0000f5760) (0xc0006bfc20) Stream removed, broadcasting: 3\nI0131 21:36:48.832507     496 log.go:172] (0xc0000f5760) (0xc000632820) Stream removed, broadcasting: 5\nI0131 21:36:48.832555     496 log.go:172] (0xc0000f5760) Go away received\nI0131 21:36:48.832901     496 log.go:172] (0xc0000f5760) (0xc0008aa820) Stream removed, broadcasting: 1\nI0131 21:36:48.832926     496 log.go:172] (0xc0000f5760) (0xc0006bfc20) Stream removed, broadcasting: 3\nI0131 21:36:48.832946     496 log.go:172] (0xc0000f5760) (0xc000632820) Stream removed, broadcasting: 5\n"
Jan 31 21:36:48.844: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 21:36:48.844: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 21:36:48.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1121 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 21:36:49.277: INFO: stderr: "I0131 21:36:49.048528     512 log.go:172] (0xc000a9cfd0) (0xc0008ce640) Create stream\nI0131 21:36:49.048728     512 log.go:172] (0xc000a9cfd0) (0xc0008ce640) Stream added, broadcasting: 1\nI0131 21:36:49.067448     512 log.go:172] (0xc000a9cfd0) Reply frame received for 1\nI0131 21:36:49.067587     512 log.go:172] (0xc000a9cfd0) (0xc000654640) Create stream\nI0131 21:36:49.067609     512 log.go:172] (0xc000a9cfd0) (0xc000654640) Stream added, broadcasting: 3\nI0131 21:36:49.069054     512 log.go:172] (0xc000a9cfd0) Reply frame received for 3\nI0131 21:36:49.069244     512 log.go:172] (0xc000a9cfd0) (0xc0004f7400) Create stream\nI0131 21:36:49.069273     512 log.go:172] (0xc000a9cfd0) (0xc0004f7400) Stream added, broadcasting: 5\nI0131 21:36:49.070516     512 log.go:172] (0xc000a9cfd0) Reply frame received for 5\nI0131 21:36:49.148472     512 log.go:172] (0xc000a9cfd0) Data frame received for 5\nI0131 21:36:49.148619     512 log.go:172] (0xc0004f7400) (5) Data frame handling\nI0131 21:36:49.148721     512 log.go:172] (0xc0004f7400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 21:36:49.194292     512 log.go:172] (0xc000a9cfd0) Data frame received for 3\nI0131 21:36:49.194358     512 log.go:172] (0xc000654640) (3) Data frame handling\nI0131 21:36:49.194401     512 log.go:172] (0xc000654640) (3) Data frame sent\nI0131 21:36:49.265170     512 log.go:172] (0xc000a9cfd0) Data frame received for 1\nI0131 21:36:49.265389     512 log.go:172] (0xc000a9cfd0) (0xc0004f7400) Stream removed, broadcasting: 5\nI0131 21:36:49.265434     512 log.go:172] (0xc0008ce640) (1) Data frame handling\nI0131 21:36:49.265475     512 log.go:172] (0xc0008ce640) (1) Data frame sent\nI0131 21:36:49.265528     512 log.go:172] (0xc000a9cfd0) (0xc000654640) Stream removed, broadcasting: 3\nI0131 21:36:49.265563     512 log.go:172] (0xc000a9cfd0) (0xc0008ce640) Stream removed, broadcasting: 1\nI0131 21:36:49.265582     512 log.go:172] (0xc000a9cfd0) Go away received\nI0131 21:36:49.266470     512 log.go:172] (0xc000a9cfd0) (0xc0008ce640) Stream removed, broadcasting: 1\nI0131 21:36:49.266481     512 log.go:172] (0xc000a9cfd0) (0xc000654640) Stream removed, broadcasting: 3\nI0131 21:36:49.266491     512 log.go:172] (0xc000a9cfd0) (0xc0004f7400) Stream removed, broadcasting: 5\n"
Jan 31 21:36:49.277: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 21:36:49.278: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 21:36:49.278: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 21:36:49.283: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 31 21:36:59.305: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 21:36:59.305: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 21:36:59.305: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 21:36:59.340: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 21:36:59.340: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:16 +0000 UTC  }]
Jan 31 21:36:59.340: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:36:59.341: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:36:59.341: INFO: 
Jan 31 21:36:59.341: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 21:37:01.131: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 21:37:01.131: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:16 +0000 UTC  }]
Jan 31 21:37:01.131: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:01.131: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:01.131: INFO: 
Jan 31 21:37:01.131: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 21:37:02.145: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 21:37:02.145: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:16 +0000 UTC  }]
Jan 31 21:37:02.145: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:02.145: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:02.145: INFO: 
Jan 31 21:37:02.145: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 21:37:03.690: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 21:37:03.690: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:16 +0000 UTC  }]
Jan 31 21:37:03.691: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:03.691: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:03.691: INFO: 
Jan 31 21:37:03.691: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 21:37:04.698: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 21:37:04.698: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:16 +0000 UTC  }]
Jan 31 21:37:04.698: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:04.698: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:04.698: INFO: 
Jan 31 21:37:04.698: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 21:37:05.758: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 21:37:05.758: INFO: ss-0  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:16 +0000 UTC  }]
Jan 31 21:37:05.758: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:05.758: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:05.759: INFO: 
Jan 31 21:37:05.759: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 21:37:06.763: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 21:37:06.763: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:06.763: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:06.763: INFO: 
Jan 31 21:37:06.763: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 31 21:37:07.774: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 21:37:07.774: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:07.774: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:07.774: INFO: 
Jan 31 21:37:07.774: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 31 21:37:08.784: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 21:37:08.784: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:08.784: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 21:36:36 +0000 UTC  }]
Jan 31 21:37:08.784: INFO: 
Jan 31 21:37:08.784: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-1121
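As with the scale-up, the scale-down goes through the API while every replica is NotReady, which is exactly what the spec asserts burst deletion tolerates. A manual equivalent (the GRACE column in the polls above shows each pod's 30s termination grace period ticking):

  kubectl scale statefulset ss --replicas=0 -n statefulset-1121
  kubectl get pods -n statefulset-1121 -w   # all three pods terminate in parallel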
Jan 31 21:37:09.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1121 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 21:37:10.042: INFO: rc: 1
Jan 31 21:37:10.042: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1121 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Jan 31 21:37:20.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1121 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 21:37:20.186: INFO: rc: 1
Jan 31 21:37:20.186: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1121 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 31 21:37:30 - 21:42:04: INFO: (28 further retries elided: the same RunHostCmd against ss-1 was re-run every 10s and failed identically each time with rc: 1 and stderr: Error from server (NotFound): pods "ss-1" not found)
Jan 31 21:42:14.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1121 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 21:42:15.144: INFO: rc: 1
Jan 31 21:42:15.144: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
Jan 31 21:42:15.144: INFO: Scaling statefulset ss to 0
Jan 31 21:42:15.168: INFO: Waiting for statefulset status.replicas updated to 0
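The fields being polled here live on the StatefulSet's status; outside the framework the same check is a one-liner (a sketch, against this run's namespace):

  kubectl get statefulset ss -n statefulset-1121 \
    -o jsonpath='{.status.replicas} {.status.readyReplicas}{"\n"}'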
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 31 21:42:15.172: INFO: Deleting all statefulset in ns statefulset-1121
Jan 31 21:42:15.176: INFO: Scaling statefulset ss to 0
Jan 31 21:42:15.186: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 21:42:15.190: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:42:15.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1121" for this suite.

• [SLOW TEST:359.298 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":71,"skipped":1168,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:42:15.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:42:15.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6535" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":72,"skipped":1181,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:42:15.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 21:42:15.879: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Jan 31 21:42:17.898: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103735, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103735, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103736, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103735, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:42:19.908: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103735, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103735, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103736, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103735, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:42:21.907: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103735, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103735, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103736, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716103735, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 21:42:24.979: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:42:24.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6194-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:42:26.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6997" for this suite.
STEP: Destroying namespace "webhook-6997-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.975 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":73,"skipped":1209,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:42:26.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3770.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3770.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3770.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3770.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3770.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3770.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3770.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3770.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3770.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3770.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3770.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 193.244.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.244.193_udp@PTR;check="$$(dig +tcp +noall +answer +search 193.244.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.244.193_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3770.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3770.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3770.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3770.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3770.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3770.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3770.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3770.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3770.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3770.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3770.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 193.244.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.244.193_udp@PTR;check="$$(dig +tcp +noall +answer +search 193.244.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.244.193_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 21:42:38.716: INFO: Unable to read wheezy_udp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:38.720: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:38.726: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:38.731: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:38.761: INFO: Unable to read jessie_udp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:38.766: INFO: Unable to read jessie_tcp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:38.770: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:38.774: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:38.794: INFO: Lookups using dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776 failed for: [wheezy_udp@dns-test-service.dns-3770.svc.cluster.local wheezy_tcp@dns-test-service.dns-3770.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local jessie_udp@dns-test-service.dns-3770.svc.cluster.local jessie_tcp@dns-test-service.dns-3770.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local]

Jan 31 21:42:43.814: INFO: Unable to read wheezy_udp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:43.828: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:43.836: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:43.842: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:43.913: INFO: Unable to read jessie_udp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:43.928: INFO: Unable to read jessie_tcp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:43.941: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:43.953: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:44.010: INFO: Lookups using dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776 failed for: [wheezy_udp@dns-test-service.dns-3770.svc.cluster.local wheezy_tcp@dns-test-service.dns-3770.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local jessie_udp@dns-test-service.dns-3770.svc.cluster.local jessie_tcp@dns-test-service.dns-3770.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local]

Jan 31 21:42:48.803: INFO: Unable to read wheezy_udp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:48.811: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:48.816: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:48.820: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:48.882: INFO: Unable to read jessie_udp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:48.888: INFO: Unable to read jessie_tcp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:48.894: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:48.898: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:48.932: INFO: Lookups using dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776 failed for: [wheezy_udp@dns-test-service.dns-3770.svc.cluster.local wheezy_tcp@dns-test-service.dns-3770.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local jessie_udp@dns-test-service.dns-3770.svc.cluster.local jessie_tcp@dns-test-service.dns-3770.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local]

Jan 31 21:42:53.803: INFO: Unable to read wheezy_udp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:53.808: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:53.813: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:53.816: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:53.841: INFO: Unable to read jessie_udp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:53.844: INFO: Unable to read jessie_tcp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:53.847: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:53.851: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:53.887: INFO: Lookups using dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776 failed for: [wheezy_udp@dns-test-service.dns-3770.svc.cluster.local wheezy_tcp@dns-test-service.dns-3770.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local jessie_udp@dns-test-service.dns-3770.svc.cluster.local jessie_tcp@dns-test-service.dns-3770.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local]

Jan 31 21:42:58.802: INFO: Unable to read wheezy_udp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:58.807: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:58.812: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:58.816: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:58.849: INFO: Unable to read jessie_udp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:58.854: INFO: Unable to read jessie_tcp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:58.863: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:58.872: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:42:58.908: INFO: Lookups using dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776 failed for: [wheezy_udp@dns-test-service.dns-3770.svc.cluster.local wheezy_tcp@dns-test-service.dns-3770.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local jessie_udp@dns-test-service.dns-3770.svc.cluster.local jessie_tcp@dns-test-service.dns-3770.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local]

Jan 31 21:43:03.808: INFO: Unable to read wheezy_udp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:43:03.821: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:43:03.839: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:43:03.860: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:43:03.976: INFO: Unable to read jessie_udp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:43:03.981: INFO: Unable to read jessie_tcp@dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:43:03.987: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:43:03.991: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local from pod dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776: the server could not find the requested resource (get pods dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776)
Jan 31 21:43:04.029: INFO: Lookups using dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776 failed for: [wheezy_udp@dns-test-service.dns-3770.svc.cluster.local wheezy_tcp@dns-test-service.dns-3770.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local jessie_udp@dns-test-service.dns-3770.svc.cluster.local jessie_tcp@dns-test-service.dns-3770.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3770.svc.cluster.local]

Jan 31 21:43:08.872: INFO: DNS probes using dns-3770/dns-test-38d62bd8-0fab-431a-9e61-dc9836e88776 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:43:09.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3770" for this suite.

• [SLOW TEST:42.839 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":74,"skipped":1255,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:43:09.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:43:26.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4400" for this suite.

• [SLOW TEST:17.314 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":75,"skipped":1268,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:43:26.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:43:26.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 31 21:43:30.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4384 create -f -'
Jan 31 21:43:32.925: INFO: stderr: ""
Jan 31 21:43:32.925: INFO: stdout: "e2e-test-crd-publish-openapi-6542-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 31 21:43:32.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4384 delete e2e-test-crd-publish-openapi-6542-crds test-cr'
Jan 31 21:43:33.133: INFO: stderr: ""
Jan 31 21:43:33.133: INFO: stdout: "e2e-test-crd-publish-openapi-6542-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jan 31 21:43:33.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4384 apply -f -'
Jan 31 21:43:33.481: INFO: stderr: ""
Jan 31 21:43:33.481: INFO: stdout: "e2e-test-crd-publish-openapi-6542-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 31 21:43:33.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4384 delete e2e-test-crd-publish-openapi-6542-crds test-cr'
Jan 31 21:43:33.704: INFO: stderr: ""
Jan 31 21:43:33.704: INFO: stdout: "e2e-test-crd-publish-openapi-6542-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 31 21:43:33.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6542-crds'
Jan 31 21:43:34.085: INFO: stderr: ""
Jan 31 21:43:34.085: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6542-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:43:36.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4384" for this suite.

• [SLOW TEST:9.641 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":76,"skipped":1272,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:43:36.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 21:43:36.356: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc1df9ed-015e-4807-a403-6a2fe87a5240" in namespace "projected-5148" to be "success or failure"
Jan 31 21:43:36.383: INFO: Pod "downwardapi-volume-bc1df9ed-015e-4807-a403-6a2fe87a5240": Phase="Pending", Reason="", readiness=false. Elapsed: 26.866075ms
Jan 31 21:43:38.391: INFO: Pod "downwardapi-volume-bc1df9ed-015e-4807-a403-6a2fe87a5240": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034870841s
Jan 31 21:43:40.397: INFO: Pod "downwardapi-volume-bc1df9ed-015e-4807-a403-6a2fe87a5240": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041349364s
Jan 31 21:43:42.408: INFO: Pod "downwardapi-volume-bc1df9ed-015e-4807-a403-6a2fe87a5240": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051775352s
Jan 31 21:43:44.414: INFO: Pod "downwardapi-volume-bc1df9ed-015e-4807-a403-6a2fe87a5240": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058479291s
STEP: Saw pod success
Jan 31 21:43:44.415: INFO: Pod "downwardapi-volume-bc1df9ed-015e-4807-a403-6a2fe87a5240" satisfied condition "success or failure"
Jan 31 21:43:44.419: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-bc1df9ed-015e-4807-a403-6a2fe87a5240 container client-container: 
STEP: delete the pod
Jan 31 21:43:44.494: INFO: Waiting for pod downwardapi-volume-bc1df9ed-015e-4807-a403-6a2fe87a5240 to disappear
Jan 31 21:43:44.501: INFO: Pod downwardapi-volume-bc1df9ed-015e-4807-a403-6a2fe87a5240 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:43:44.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5148" for this suite.

• [SLOW TEST:8.334 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1280,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:43:44.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-83ab19b1-8880-406a-bd45-932104462294
STEP: Creating a pod to test consume secrets
Jan 31 21:43:44.673: INFO: Waiting up to 5m0s for pod "pod-secrets-22ec879c-ada7-4b21-a0ab-911ba77d8cad" in namespace "secrets-8918" to be "success or failure"
Jan 31 21:43:44.712: INFO: Pod "pod-secrets-22ec879c-ada7-4b21-a0ab-911ba77d8cad": Phase="Pending", Reason="", readiness=false. Elapsed: 38.647654ms
Jan 31 21:43:46.743: INFO: Pod "pod-secrets-22ec879c-ada7-4b21-a0ab-911ba77d8cad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069681413s
Jan 31 21:43:48.775: INFO: Pod "pod-secrets-22ec879c-ada7-4b21-a0ab-911ba77d8cad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102068629s
Jan 31 21:43:50.799: INFO: Pod "pod-secrets-22ec879c-ada7-4b21-a0ab-911ba77d8cad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125786094s
Jan 31 21:43:52.804: INFO: Pod "pod-secrets-22ec879c-ada7-4b21-a0ab-911ba77d8cad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.131136822s
STEP: Saw pod success
Jan 31 21:43:52.804: INFO: Pod "pod-secrets-22ec879c-ada7-4b21-a0ab-911ba77d8cad" satisfied condition "success or failure"
Jan 31 21:43:52.807: INFO: Trying to get logs from node jerma-node pod pod-secrets-22ec879c-ada7-4b21-a0ab-911ba77d8cad container secret-volume-test: 
STEP: delete the pod
Jan 31 21:43:52.985: INFO: Waiting for pod pod-secrets-22ec879c-ada7-4b21-a0ab-911ba77d8cad to disappear
Jan 31 21:43:53.000: INFO: Pod pod-secrets-22ec879c-ada7-4b21-a0ab-911ba77d8cad no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:43:53.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8918" for this suite.

• [SLOW TEST:8.503 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1285,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:43:53.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 31 21:43:53.824: INFO: Pod name wrapped-volume-race-5c21532f-1485-4b5c-bcd3-f86dae4e1adf: Found 0 pods out of 5
Jan 31 21:44:00.623: INFO: Pod name wrapped-volume-race-5c21532f-1485-4b5c-bcd3-f86dae4e1adf: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5c21532f-1485-4b5c-bcd3-f86dae4e1adf in namespace emptydir-wrapper-1922, will wait for the garbage collector to delete the pods
Jan 31 21:44:25.673: INFO: Deleting ReplicationController wrapped-volume-race-5c21532f-1485-4b5c-bcd3-f86dae4e1adf took: 49.028778ms
Jan 31 21:44:26.074: INFO: Terminating ReplicationController wrapped-volume-race-5c21532f-1485-4b5c-bcd3-f86dae4e1adf pods took: 401.165889ms
STEP: Creating RC which spawns configmap-volume pods
Jan 31 21:44:43.630: INFO: Pod name wrapped-volume-race-46382d56-0c8c-4a41-a298-f43ed08a3fc7: Found 0 pods out of 5
Jan 31 21:44:48.642: INFO: Pod name wrapped-volume-race-46382d56-0c8c-4a41-a298-f43ed08a3fc7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-46382d56-0c8c-4a41-a298-f43ed08a3fc7 in namespace emptydir-wrapper-1922, will wait for the garbage collector to delete the pods
Jan 31 21:45:12.791: INFO: Deleting ReplicationController wrapped-volume-race-46382d56-0c8c-4a41-a298-f43ed08a3fc7 took: 12.170951ms
Jan 31 21:45:13.292: INFO: Terminating ReplicationController wrapped-volume-race-46382d56-0c8c-4a41-a298-f43ed08a3fc7 pods took: 500.503052ms
STEP: Creating RC which spawns configmap-volume pods
Jan 31 21:45:33.438: INFO: Pod name wrapped-volume-race-72279066-38a6-4a7e-a281-33a611063001: Found 0 pods out of 5
Jan 31 21:45:38.476: INFO: Pod name wrapped-volume-race-72279066-38a6-4a7e-a281-33a611063001: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-72279066-38a6-4a7e-a281-33a611063001 in namespace emptydir-wrapper-1922, will wait for the garbage collector to delete the pods
Jan 31 21:46:08.741: INFO: Deleting ReplicationController wrapped-volume-race-72279066-38a6-4a7e-a281-33a611063001 took: 12.510889ms
Jan 31 21:46:09.242: INFO: Terminating ReplicationController wrapped-volume-race-72279066-38a6-4a7e-a281-33a611063001 pods took: 501.284818ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:46:24.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1922" for this suite.

• [SLOW TEST:151.351 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":79,"skipped":1314,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:46:24.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:46:58.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7534" for this suite.

• [SLOW TEST:34.118 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":80,"skipped":1349,"failed":0}
S
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:46:58.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:46:58.691: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-ca1a6919-46c0-45d1-8ac5-06858f265b35" in namespace "security-context-test-5964" to be "success or failure"
Jan 31 21:46:58.700: INFO: Pod "alpine-nnp-false-ca1a6919-46c0-45d1-8ac5-06858f265b35": Phase="Pending", Reason="", readiness=false. Elapsed: 8.32077ms
Jan 31 21:47:00.708: INFO: Pod "alpine-nnp-false-ca1a6919-46c0-45d1-8ac5-06858f265b35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016703248s
Jan 31 21:47:02.713: INFO: Pod "alpine-nnp-false-ca1a6919-46c0-45d1-8ac5-06858f265b35": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0213136s
Jan 31 21:47:04.720: INFO: Pod "alpine-nnp-false-ca1a6919-46c0-45d1-8ac5-06858f265b35": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02860895s
Jan 31 21:47:06.727: INFO: Pod "alpine-nnp-false-ca1a6919-46c0-45d1-8ac5-06858f265b35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035559115s
Jan 31 21:47:06.727: INFO: Pod "alpine-nnp-false-ca1a6919-46c0-45d1-8ac5-06858f265b35" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:47:06.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5964" for this suite.

• [SLOW TEST:8.389 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1350,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:47:06.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-a091e46b-7cb2-4dc4-b470-ea6217efe6cd
STEP: Creating configMap with name cm-test-opt-upd-c9636bf3-7928-43a1-9871-162bcca133eb
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a091e46b-7cb2-4dc4-b470-ea6217efe6cd
STEP: Updating configmap cm-test-opt-upd-c9636bf3-7928-43a1-9871-162bcca133eb
STEP: Creating configMap with name cm-test-opt-create-97512e0c-4a53-4841-8674-fcd618b5727c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:47:23.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8946" for this suite.

• [SLOW TEST:16.448 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1382,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:47:23.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-fbe826b5-1d7d-4bbd-9c1d-3a2590ff7861 in namespace container-probe-3716
Jan 31 21:47:31.505: INFO: Started pod liveness-fbe826b5-1d7d-4bbd-9c1d-3a2590ff7861 in namespace container-probe-3716
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 21:47:31.511: INFO: Initial restart count of pod liveness-fbe826b5-1d7d-4bbd-9c1d-3a2590ff7861 is 0
Jan 31 21:47:49.578: INFO: Restart count of pod container-probe-3716/liveness-fbe826b5-1d7d-4bbd-9c1d-3a2590ff7861 is now 1 (18.066516805s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:47:49.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3716" for this suite.

• [SLOW TEST:26.277 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1420,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:47:49.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 31 21:48:00.296: INFO: Successfully updated pod "annotationupdate4a3ba938-f38c-4074-9cdd-071f3496e4c5"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:48:02.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7145" for this suite.

• [SLOW TEST:12.724 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1434,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:48:02.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 31 21:48:02.516: INFO: Waiting up to 5m0s for pod "pod-75620896-2892-412b-8b7e-f9b91a0c8103" in namespace "emptydir-936" to be "success or failure"
Jan 31 21:48:02.522: INFO: Pod "pod-75620896-2892-412b-8b7e-f9b91a0c8103": Phase="Pending", Reason="", readiness=false. Elapsed: 5.522062ms
Jan 31 21:48:04.530: INFO: Pod "pod-75620896-2892-412b-8b7e-f9b91a0c8103": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014328601s
Jan 31 21:48:06.538: INFO: Pod "pod-75620896-2892-412b-8b7e-f9b91a0c8103": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022439562s
Jan 31 21:48:08.546: INFO: Pod "pod-75620896-2892-412b-8b7e-f9b91a0c8103": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030325167s
Jan 31 21:48:10.598: INFO: Pod "pod-75620896-2892-412b-8b7e-f9b91a0c8103": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08235053s
STEP: Saw pod success
Jan 31 21:48:10.599: INFO: Pod "pod-75620896-2892-412b-8b7e-f9b91a0c8103" satisfied condition "success or failure"
Jan 31 21:48:10.604: INFO: Trying to get logs from node jerma-node pod pod-75620896-2892-412b-8b7e-f9b91a0c8103 container test-container: 
STEP: delete the pod
Jan 31 21:48:10.683: INFO: Waiting for pod pod-75620896-2892-412b-8b7e-f9b91a0c8103 to disappear
Jan 31 21:48:10.693: INFO: Pod pod-75620896-2892-412b-8b7e-f9b91a0c8103 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:48:10.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-936" for this suite.

• [SLOW TEST:8.414 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1443,"failed":0}
SSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:48:10.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-2227/configmap-test-118e99af-8cc4-4488-b79a-61dc7498b411
STEP: Creating a pod to test consume configMaps
Jan 31 21:48:10.918: INFO: Waiting up to 5m0s for pod "pod-configmaps-31254aa6-961a-4eea-a8fd-e60d34b90d5c" in namespace "configmap-2227" to be "success or failure"
Jan 31 21:48:10.934: INFO: Pod "pod-configmaps-31254aa6-961a-4eea-a8fd-e60d34b90d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.57913ms
Jan 31 21:48:12.941: INFO: Pod "pod-configmaps-31254aa6-961a-4eea-a8fd-e60d34b90d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02252497s
Jan 31 21:48:14.947: INFO: Pod "pod-configmaps-31254aa6-961a-4eea-a8fd-e60d34b90d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028336223s
Jan 31 21:48:16.956: INFO: Pod "pod-configmaps-31254aa6-961a-4eea-a8fd-e60d34b90d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037005502s
Jan 31 21:48:18.963: INFO: Pod "pod-configmaps-31254aa6-961a-4eea-a8fd-e60d34b90d5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044492713s
STEP: Saw pod success
Jan 31 21:48:18.963: INFO: Pod "pod-configmaps-31254aa6-961a-4eea-a8fd-e60d34b90d5c" satisfied condition "success or failure"
Jan 31 21:48:18.968: INFO: Trying to get logs from node jerma-node pod pod-configmaps-31254aa6-961a-4eea-a8fd-e60d34b90d5c container env-test: 
STEP: delete the pod
Jan 31 21:48:19.208: INFO: Waiting for pod pod-configmaps-31254aa6-961a-4eea-a8fd-e60d34b90d5c to disappear
Jan 31 21:48:19.230: INFO: Pod pod-configmaps-31254aa6-961a-4eea-a8fd-e60d34b90d5c no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:48:19.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2227" for this suite.

• [SLOW TEST:8.491 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1446,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:48:19.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 31 21:48:19.502: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Jan 31 21:48:19.930: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 31 21:48:22.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104099, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104099, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104100, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104099, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:48:24.101: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104099, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104099, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104100, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104099, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:48:26.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104099, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104099, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104100, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104099, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:48:28.121: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104099, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104099, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104100, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104099, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:48:31.043: INFO: Waited 934.711961ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:48:31.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-8558" for this suite.

• [SLOW TEST:12.490 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":87,"skipped":1452,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:48:31.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:48:31.851: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:48:36.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3534" for this suite.

• [SLOW TEST:5.226 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":88,"skipped":1462,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:48:36.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 31 21:48:45.325: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4994 PodName:pod-sharedvolume-491a8dbc-fe3e-491b-9f5c-724e256ff01c ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 21:48:45.325: INFO: >>> kubeConfig: /root/.kube/config
I0131 21:48:45.379119       8 log.go:172] (0xc002e2e160) (0xc00154e320) Create stream
I0131 21:48:45.379228       8 log.go:172] (0xc002e2e160) (0xc00154e320) Stream added, broadcasting: 1
I0131 21:48:45.382499       8 log.go:172] (0xc002e2e160) Reply frame received for 1
I0131 21:48:45.382533       8 log.go:172] (0xc002e2e160) (0xc0011bed20) Create stream
I0131 21:48:45.382566       8 log.go:172] (0xc002e2e160) (0xc0011bed20) Stream added, broadcasting: 3
I0131 21:48:45.383919       8 log.go:172] (0xc002e2e160) Reply frame received for 3
I0131 21:48:45.383954       8 log.go:172] (0xc002e2e160) (0xc0010d8dc0) Create stream
I0131 21:48:45.383966       8 log.go:172] (0xc002e2e160) (0xc0010d8dc0) Stream added, broadcasting: 5
I0131 21:48:45.385579       8 log.go:172] (0xc002e2e160) Reply frame received for 5
I0131 21:48:45.449114       8 log.go:172] (0xc002e2e160) Data frame received for 3
I0131 21:48:45.449199       8 log.go:172] (0xc0011bed20) (3) Data frame handling
I0131 21:48:45.449224       8 log.go:172] (0xc0011bed20) (3) Data frame sent
I0131 21:48:45.516670       8 log.go:172] (0xc002e2e160) Data frame received for 1
I0131 21:48:45.516831       8 log.go:172] (0xc00154e320) (1) Data frame handling
I0131 21:48:45.516860       8 log.go:172] (0xc00154e320) (1) Data frame sent
I0131 21:48:45.516877       8 log.go:172] (0xc002e2e160) (0xc00154e320) Stream removed, broadcasting: 1
I0131 21:48:45.519225       8 log.go:172] (0xc002e2e160) (0xc0011bed20) Stream removed, broadcasting: 3
I0131 21:48:45.519456       8 log.go:172] (0xc002e2e160) (0xc0010d8dc0) Stream removed, broadcasting: 5
I0131 21:48:45.519518       8 log.go:172] (0xc002e2e160) Go away received
I0131 21:48:45.519721       8 log.go:172] (0xc002e2e160) (0xc00154e320) Stream removed, broadcasting: 1
I0131 21:48:45.519768       8 log.go:172] (0xc002e2e160) (0xc0011bed20) Stream removed, broadcasting: 3
I0131 21:48:45.519780       8 log.go:172] (0xc002e2e160) (0xc0010d8dc0) Stream removed, broadcasting: 5
Jan 31 21:48:45.519: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:48:45.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4994" for this suite.

• [SLOW TEST:8.578 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":89,"skipped":1469,"failed":0}
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:48:45.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-6947/secret-test-e45aa0e9-59db-47d3-a2bd-8a1a20ad3941
STEP: Creating a pod to test consume secrets
Jan 31 21:48:45.658: INFO: Waiting up to 5m0s for pod "pod-configmaps-dbd2ebd9-46b3-400d-8053-8f8e514d887b" in namespace "secrets-6947" to be "success or failure"
Jan 31 21:48:45.673: INFO: Pod "pod-configmaps-dbd2ebd9-46b3-400d-8053-8f8e514d887b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.084552ms
Jan 31 21:48:47.680: INFO: Pod "pod-configmaps-dbd2ebd9-46b3-400d-8053-8f8e514d887b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022400725s
Jan 31 21:48:49.687: INFO: Pod "pod-configmaps-dbd2ebd9-46b3-400d-8053-8f8e514d887b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029579068s
Jan 31 21:48:51.694: INFO: Pod "pod-configmaps-dbd2ebd9-46b3-400d-8053-8f8e514d887b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036096038s
Jan 31 21:48:53.713: INFO: Pod "pod-configmaps-dbd2ebd9-46b3-400d-8053-8f8e514d887b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055394955s
STEP: Saw pod success
Jan 31 21:48:53.713: INFO: Pod "pod-configmaps-dbd2ebd9-46b3-400d-8053-8f8e514d887b" satisfied condition "success or failure"
Jan 31 21:48:53.719: INFO: Trying to get logs from node jerma-node pod pod-configmaps-dbd2ebd9-46b3-400d-8053-8f8e514d887b container env-test: 
STEP: delete the pod
Jan 31 21:48:53.808: INFO: Waiting for pod pod-configmaps-dbd2ebd9-46b3-400d-8053-8f8e514d887b to disappear
Jan 31 21:48:54.014: INFO: Pod pod-configmaps-dbd2ebd9-46b3-400d-8053-8f8e514d887b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:48:54.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6947" for this suite.

• [SLOW TEST:8.485 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1472,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:48:54.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-7613/configmap-test-52a06950-6a3d-4ebb-b75b-c18c6634c5f0
STEP: Creating a pod to test consume configMaps
Jan 31 21:48:54.252: INFO: Waiting up to 5m0s for pod "pod-configmaps-2f5c32a9-054e-4acf-8999-4e1175142c10" in namespace "configmap-7613" to be "success or failure"
Jan 31 21:48:54.292: INFO: Pod "pod-configmaps-2f5c32a9-054e-4acf-8999-4e1175142c10": Phase="Pending", Reason="", readiness=false. Elapsed: 40.273768ms
Jan 31 21:48:56.352: INFO: Pod "pod-configmaps-2f5c32a9-054e-4acf-8999-4e1175142c10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10028196s
Jan 31 21:48:58.361: INFO: Pod "pod-configmaps-2f5c32a9-054e-4acf-8999-4e1175142c10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109101767s
Jan 31 21:49:00.371: INFO: Pod "pod-configmaps-2f5c32a9-054e-4acf-8999-4e1175142c10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119101911s
Jan 31 21:49:02.375: INFO: Pod "pod-configmaps-2f5c32a9-054e-4acf-8999-4e1175142c10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.12337206s
STEP: Saw pod success
Jan 31 21:49:02.375: INFO: Pod "pod-configmaps-2f5c32a9-054e-4acf-8999-4e1175142c10" satisfied condition "success or failure"
Jan 31 21:49:02.377: INFO: Trying to get logs from node jerma-node pod pod-configmaps-2f5c32a9-054e-4acf-8999-4e1175142c10 container env-test: 
STEP: delete the pod
Jan 31 21:49:02.405: INFO: Waiting for pod pod-configmaps-2f5c32a9-054e-4acf-8999-4e1175142c10 to disappear
Jan 31 21:49:02.435: INFO: Pod pod-configmaps-2f5c32a9-054e-4acf-8999-4e1175142c10 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:49:02.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7613" for this suite.

• [SLOW TEST:8.421 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1485,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:49:02.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:49:02.530: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:49:03.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1112" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":92,"skipped":1504,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:49:03.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:49:11.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9234" for this suite.

• [SLOW TEST:8.216 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1530,"failed":0}
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:49:11.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 31 21:49:20.660: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4797 pod-service-account-c814a5e5-73c0-4a45-ac90-8a3a7933b7fd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 31 21:49:21.019: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4797 pod-service-account-c814a5e5-73c0-4a45-ac90-8a3a7933b7fd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 31 21:49:21.352: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4797 pod-service-account-c814a5e5-73c0-4a45-ac90-8a3a7933b7fd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:49:21.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4797" for this suite.

• [SLOW TEST:9.826 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":94,"skipped":1534,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:49:21.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 31 21:49:21.976: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:49:42.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7388" for this suite.

• [SLOW TEST:20.664 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1573,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:49:42.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444
STEP: creating a pod
Jan 31 21:49:42.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6563 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jan 31 21:49:42.709: INFO: stderr: ""
Jan 31 21:49:42.710: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Jan 31 21:49:42.710: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jan 31 21:49:42.710: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6563" to be "running and ready, or succeeded"
Jan 31 21:49:42.785: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 74.408642ms
Jan 31 21:49:44.797: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086966938s
Jan 31 21:49:46.804: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093202247s
Jan 31 21:49:48.813: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102771112s
Jan 31 21:49:50.818: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.108057896s
Jan 31 21:49:50.818: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jan 31 21:49:50.818: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jan 31 21:49:50.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6563'
Jan 31 21:49:50.984: INFO: stderr: ""
Jan 31 21:49:50.984: INFO: stdout: "I0131 21:49:48.680842       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/9s7q 315\nI0131 21:49:48.881068       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/x68 589\nI0131 21:49:49.080989       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/ktkg 524\nI0131 21:49:49.281316       1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/tj4 570\nI0131 21:49:49.481301       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/rmwx 366\nI0131 21:49:49.681249       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/tv2 430\nI0131 21:49:49.881175       1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/886 288\nI0131 21:49:50.081117       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/5v2b 300\nI0131 21:49:50.281219       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/k4k4 577\nI0131 21:49:50.481262       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/7wqm 213\nI0131 21:49:50.681207       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/q7q 396\nI0131 21:49:50.881191       1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/6tm 535\n"
STEP: limiting log lines
Jan 31 21:49:50.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6563 --tail=1'
Jan 31 21:49:51.193: INFO: stderr: ""
Jan 31 21:49:51.193: INFO: stdout: "I0131 21:49:51.082595       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/pt99 404\n"
Jan 31 21:49:51.193: INFO: got output "I0131 21:49:51.082595       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/pt99 404\n"
STEP: limiting log bytes
Jan 31 21:49:51.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6563 --limit-bytes=1'
Jan 31 21:49:51.290: INFO: stderr: ""
Jan 31 21:49:51.290: INFO: stdout: "I"
Jan 31 21:49:51.290: INFO: got output "I"
STEP: exposing timestamps
Jan 31 21:49:51.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6563 --tail=1 --timestamps'
Jan 31 21:49:51.404: INFO: stderr: ""
Jan 31 21:49:51.404: INFO: stdout: "2020-01-31T21:49:51.282264399Z I0131 21:49:51.281867       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/jvv 205\n"
Jan 31 21:49:51.404: INFO: got output "2020-01-31T21:49:51.282264399Z I0131 21:49:51.281867       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/jvv 205\n"
STEP: restricting to a time range
Jan 31 21:49:53.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6563 --since=1s'
Jan 31 21:49:54.108: INFO: stderr: ""
Jan 31 21:49:54.109: INFO: stdout: "I0131 21:49:53.281134       1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/x98c 318\nI0131 21:49:53.481183       1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/zvp8 491\nI0131 21:49:53.681183       1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/76nb 566\nI0131 21:49:53.881541       1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/89vw 347\nI0131 21:49:54.081200       1 logs_generator.go:76] 27 POST /api/v1/namespaces/kube-system/pods/4ckg 201\n"
Jan 31 21:49:54.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6563 --since=24h'
Jan 31 21:49:54.201: INFO: stderr: ""
Jan 31 21:49:54.201: INFO: stdout: "I0131 21:49:48.680842       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/9s7q 315\nI0131 21:49:48.881068       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/x68 589\nI0131 21:49:49.080989       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/ktkg 524\nI0131 21:49:49.281316       1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/tj4 570\nI0131 21:49:49.481301       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/rmwx 366\nI0131 21:49:49.681249       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/tv2 430\nI0131 21:49:49.881175       1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/886 288\nI0131 21:49:50.081117       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/5v2b 300\nI0131 21:49:50.281219       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/k4k4 577\nI0131 21:49:50.481262       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/7wqm 213\nI0131 21:49:50.681207       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/q7q 396\nI0131 21:49:50.881191       1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/6tm 535\nI0131 21:49:51.082595       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/pt99 404\nI0131 21:49:51.281867       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/jvv 205\nI0131 21:49:51.481232       1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/prtm 572\nI0131 21:49:51.681246       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/g5b 265\nI0131 21:49:51.881190       1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/nks 586\nI0131 21:49:52.081682       1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/xpn 311\nI0131 21:49:52.281662       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/j7z 384\nI0131 21:49:52.481251       1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/lxwz 598\nI0131 21:49:52.681701       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/kwp 250\nI0131 21:49:52.881400       1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/njd 505\nI0131 21:49:53.081230       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/g25 287\nI0131 21:49:53.281134       1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/x98c 318\nI0131 21:49:53.481183       1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/zvp8 491\nI0131 21:49:53.681183       1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/76nb 566\nI0131 21:49:53.881541       1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/89vw 347\nI0131 21:49:54.081200       1 logs_generator.go:76] 27 POST /api/v1/namespaces/kube-system/pods/4ckg 201\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
Jan 31 21:49:54.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6563'
Jan 31 21:49:58.545: INFO: stderr: ""
Jan 31 21:49:58.545: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:49:58.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6563" for this suite.

• [SLOW TEST:16.156 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":96,"skipped":1586,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:49:58.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 21:49:59.210: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 21:50:01.230: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104199, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104199, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104199, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104199, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:50:03.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104199, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104199, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104199, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104199, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 21:50:05.237: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104199, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104199, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104199, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716104199, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 21:50:08.334: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod; this should be denied by the webhook
Jan 31 21:50:16.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4552 to-be-attached-pod -i -c=container1'
Jan 31 21:50:16.671: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:50:16.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4552" for this suite.
STEP: Destroying namespace "webhook-4552-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.266 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":97,"skipped":1591,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:50:16.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Jan 31 21:50:27.970: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:50:29.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9450" for this suite.

• [SLOW TEST:12.200 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":98,"skipped":1605,"failed":0}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:50:29.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 31 21:50:42.948: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:50:43.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5856" for this suite.

• [SLOW TEST:14.236 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1609,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:50:43.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:50:54.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2550" for this suite.

• [SLOW TEST:11.277 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":100,"skipped":1637,"failed":0}
S
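
The quota object itself is not shown above. A sketch of a ResourceQuota that would drive these steps, trimmed to the services count (the real test's quota tracks more resource types, and the name and limit here are hypothetical):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota            # hypothetical name
spec:
  hard:
    services: "10"            # hypothetical ceiling on Service objects in the namespace

Creating a Service raises status.used.services by one and deleting it lowers it again, which is exactly the "captures service creation" and "released usage" pair of steps in the log.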
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:50:54.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 31 21:50:54.645: INFO: Waiting up to 5m0s for pod "pod-fce94700-a3dc-4498-a340-aa06fbe216eb" in namespace "emptydir-3734" to be "success or failure"
Jan 31 21:50:54.675: INFO: Pod "pod-fce94700-a3dc-4498-a340-aa06fbe216eb": Phase="Pending", Reason="", readiness=false. Elapsed: 29.922776ms
Jan 31 21:50:56.688: INFO: Pod "pod-fce94700-a3dc-4498-a340-aa06fbe216eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04274349s
Jan 31 21:50:58.696: INFO: Pod "pod-fce94700-a3dc-4498-a340-aa06fbe216eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050255251s
Jan 31 21:51:00.701: INFO: Pod "pod-fce94700-a3dc-4498-a340-aa06fbe216eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055679922s
Jan 31 21:51:02.710: INFO: Pod "pod-fce94700-a3dc-4498-a340-aa06fbe216eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064814165s
STEP: Saw pod success
Jan 31 21:51:02.711: INFO: Pod "pod-fce94700-a3dc-4498-a340-aa06fbe216eb" satisfied condition "success or failure"
Jan 31 21:51:02.718: INFO: Trying to get logs from node jerma-node pod pod-fce94700-a3dc-4498-a340-aa06fbe216eb container test-container: 
STEP: delete the pod
Jan 31 21:51:02.889: INFO: Waiting for pod pod-fce94700-a3dc-4498-a340-aa06fbe216eb to disappear
Jan 31 21:51:02.900: INFO: Pod pod-fce94700-a3dc-4498-a340-aa06fbe216eb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:51:02.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3734" for this suite.

• [SLOW TEST:8.371 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1638,"failed":0}
SSSSS
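
The "(root,0644,default)" tuple names the matrix entry: files created as root, mode 0644, on the default (node-disk) medium. A rough equivalent of the pod, with hypothetical names and busybox standing in for the e2e mounttest image:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Create a file with the expected mode and print its permissions and owner.
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # no medium field, i.e. the node's default disk-backed medium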
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:51:02.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:51:03.127: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 31 21:51:08.155: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 31 21:51:12.173: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 31 21:51:20.289: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-1428 /apis/apps/v1/namespaces/deployment-1428/deployments/test-cleanup-deployment 20cea9a3-cea8-4819-a8e3-b58648695c97 5603582 1 2020-01-31 21:51:12 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00433e9d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-31 21:51:12 +0000 UTC,LastTransitionTime:2020-01-31 21:51:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-01-31 21:51:18 +0000 UTC,LastTransitionTime:2020-01-31 21:51:12 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 31 21:51:20.294: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-1428 /apis/apps/v1/namespaces/deployment-1428/replicasets/test-cleanup-deployment-55ffc6b7b6 25c9d740-381d-4525-b100-20a3862ca3c9 5603572 1 2020-01-31 21:51:12 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 20cea9a3-cea8-4819-a8e3-b58648695c97 0xc00433edb7 0xc00433edb8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00433ee48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 31 21:51:20.298: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-frvv8" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-frvv8 test-cleanup-deployment-55ffc6b7b6- deployment-1428 /api/v1/namespaces/deployment-1428/pods/test-cleanup-deployment-55ffc6b7b6-frvv8 65ff43a3-b998-4aa4-8a28-b98c34ebab63 5603571 0 2020-01-31 21:51:12 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 25c9d740-381d-4525-b100-20a3862ca3c9 0xc002316f97 0xc002316f98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8qt6g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8qt6g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8qt6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 21:51:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 21:51:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 21:51:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 21:51:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-31 21:51:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 21:51:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://eba0c46f95e7781f969bcf3f55cdc22fef00da6641dd5a16e7870d09ec531c28,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:51:20.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1428" for this suite.

• [SLOW TEST:17.402 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":102,"skipped":1643,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
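
The one-line object dump above is hard to read; reduced to the fields that matter for this test, the Deployment it prints is roughly:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  namespace: deployment-1428
spec:
  replicas: 1
  revisionHistoryLimit: 0          # the key setting: keep zero old ReplicaSets around
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8

With revisionHistoryLimit: 0, the ReplicaSet behind the pre-existing cleanup-pod is garbage-collected as soon as the new ReplicaSet test-cleanup-deployment-55ffc6b7b6 takes over, which is the history cleanup the test waits for.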
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:51:20.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jan 31 21:51:20.477: INFO: >>> kubeConfig: /root/.kube/config
Jan 31 21:51:23.516: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:51:35.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9628" for this suite.

• [SLOW TEST:15.031 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":103,"skipped":1677,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
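
The two extra kubeConfig loads at 21:51:20 and 21:51:23 are the clients built for the two CRDs under test. A minimal CRD of the kind being registered, with hypothetical group and names (the second CRD would differ only in its group):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.groupa.example.com        # hypothetical; must be <plural>.<group>
spec:
  group: groupa.example.com            # hypothetical group
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # accept arbitrary fields

Once both are served, each type's schema should appear in the cluster's aggregated OpenAPI document, which is what the "show up in OpenAPI documentation" step verifies.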
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:51:35.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jan 31 21:51:35.413: INFO: namespace kubectl-5587
Jan 31 21:51:35.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5587'
Jan 31 21:51:35.898: INFO: stderr: ""
Jan 31 21:51:35.899: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 31 21:51:36.910: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:51:36.910: INFO: Found 0 / 1
Jan 31 21:51:37.920: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:51:37.920: INFO: Found 0 / 1
Jan 31 21:51:38.914: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:51:38.914: INFO: Found 0 / 1
Jan 31 21:51:39.913: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:51:39.914: INFO: Found 0 / 1
Jan 31 21:51:40.947: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:51:40.947: INFO: Found 0 / 1
Jan 31 21:51:41.908: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:51:41.908: INFO: Found 0 / 1
Jan 31 21:51:42.909: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:51:42.909: INFO: Found 0 / 1
Jan 31 21:51:43.909: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:51:43.909: INFO: Found 1 / 1
Jan 31 21:51:43.909: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 31 21:51:43.914: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:51:43.914: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 31 21:51:43.914: INFO: wait on agnhost-master startup in kubectl-5587 
Jan 31 21:51:43.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-jf4mv agnhost-master --namespace=kubectl-5587'
Jan 31 21:51:44.149: INFO: stderr: ""
Jan 31 21:51:44.149: INFO: stdout: "Paused\n"
STEP: exposing RC
Jan 31 21:51:44.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5587'
Jan 31 21:51:44.377: INFO: stderr: ""
Jan 31 21:51:44.377: INFO: stdout: "service/rm2 exposed\n"
Jan 31 21:51:44.386: INFO: Service rm2 in namespace kubectl-5587 found.
STEP: exposing service
Jan 31 21:51:46.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5587'
Jan 31 21:51:46.627: INFO: stderr: ""
Jan 31 21:51:46.627: INFO: stdout: "service/rm3 exposed\n"
Jan 31 21:51:46.640: INFO: Service rm3 in namespace kubectl-5587 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:51:48.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5587" for this suite.

• [SLOW TEST:13.342 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":104,"skipped":1704,"failed":0}
SSSSSSSSSSSSSSSSSSSS
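
The `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` call above is shorthand for creating a Service; what it generates is roughly the following (the selector is inferred from the RC, whose pods matched map[app:agnhost] earlier in the log):

apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-5587
spec:
  selector:
    app: agnhost          # copied from the replication controller's selector
  ports:
  - port: 1234            # the Service port (--port)
    targetPort: 6379      # the container port traffic is forwarded to (--target-port)

`expose service rm2 --name=rm3` then derives a second Service from the first in the same way, pointing rm3's port 2345 at the same target port.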
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:51:48.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 31 21:51:48.807: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 31 21:51:48.827: INFO: Waiting for terminating namespaces to be deleted...
Jan 31 21:51:48.832: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 31 21:51:48.842: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 31 21:51:48.842: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 21:51:48.842: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 31 21:51:48.843: INFO: 	Container weave ready: true, restart count 1
Jan 31 21:51:48.843: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 21:51:48.843: INFO: agnhost-master-jf4mv from kubectl-5587 started at 2020-01-31 21:51:36 +0000 UTC (1 container statuses recorded)
Jan 31 21:51:48.843: INFO: 	Container agnhost-master ready: true, restart count 0
Jan 31 21:51:48.843: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 31 21:51:48.877: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 31 21:51:48.877: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 31 21:51:48.877: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 31 21:51:48.877: INFO: 	Container etcd ready: true, restart count 1
Jan 31 21:51:48.877: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 31 21:51:48.877: INFO: 	Container coredns ready: true, restart count 0
Jan 31 21:51:48.877: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 31 21:51:48.877: INFO: 	Container coredns ready: true, restart count 0
Jan 31 21:51:48.877: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 31 21:51:48.877: INFO: 	Container weave ready: true, restart count 0
Jan 31 21:51:48.877: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 21:51:48.877: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 31 21:51:48.877: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 31 21:51:48.877: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 31 21:51:48.877: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 21:51:48.877: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 31 21:51:48.877: INFO: 	Container kube-scheduler ready: true, restart count 4
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1b057379-7656-4b6d-a5a5-0b26d60d82db 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-1b057379-7656-4b6d-a5a5-0b26d60d82db off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1b057379-7656-4b6d-a5a5-0b26d60d82db
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:52:07.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9908" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:18.474 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":105,"skipped":1724,"failed":0}
SSSSSSSSSSSSS
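
The steps above label node jerma-node with a random key and the value 42, then relaunch the pod with a matching nodeSelector. That relaunched pod would carry something like the following (the pod name and image are assumptions; the label is the one from the log):

apiVersion: v1
kind: Pod
metadata:
  name: with-labels                 # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/e2e-1b057379-7656-4b6d-a5a5-0b26d60d82db: "42"   # must match the node label applied above
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1     # assumed placeholder image

Scheduling succeeds only because exactly one node carries the label; removing the label afterwards restores the node for later tests.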
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:52:07.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-5749dc36-406b-4990-aebc-5e29926d5287
STEP: Creating a pod to test consume secrets
Jan 31 21:52:07.540: INFO: Waiting up to 5m0s for pod "pod-secrets-c609b942-2061-442b-b440-deb01ba6e0d5" in namespace "secrets-3976" to be "success or failure"
Jan 31 21:52:07.564: INFO: Pod "pod-secrets-c609b942-2061-442b-b440-deb01ba6e0d5": Phase="Pending", Reason="", readiness=false. Elapsed: 23.850744ms
Jan 31 21:52:09.570: INFO: Pod "pod-secrets-c609b942-2061-442b-b440-deb01ba6e0d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030174063s
Jan 31 21:52:11.577: INFO: Pod "pod-secrets-c609b942-2061-442b-b440-deb01ba6e0d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03674469s
Jan 31 21:52:13.582: INFO: Pod "pod-secrets-c609b942-2061-442b-b440-deb01ba6e0d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0417131s
Jan 31 21:52:15.636: INFO: Pod "pod-secrets-c609b942-2061-442b-b440-deb01ba6e0d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095871481s
Jan 31 21:52:17.644: INFO: Pod "pod-secrets-c609b942-2061-442b-b440-deb01ba6e0d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103457217s
STEP: Saw pod success
Jan 31 21:52:17.644: INFO: Pod "pod-secrets-c609b942-2061-442b-b440-deb01ba6e0d5" satisfied condition "success or failure"
Jan 31 21:52:17.647: INFO: Trying to get logs from node jerma-node pod pod-secrets-c609b942-2061-442b-b440-deb01ba6e0d5 container secret-volume-test: 
STEP: delete the pod
Jan 31 21:52:17.742: INFO: Waiting for pod pod-secrets-c609b942-2061-442b-b440-deb01ba6e0d5 to disappear
Jan 31 21:52:17.750: INFO: Pod pod-secrets-c609b942-2061-442b-b440-deb01ba6e0d5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:52:17.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3976" for this suite.

• [SLOW TEST:10.604 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1737,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
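
"With mappings" means the secret's keys are projected to chosen file names via items. A sketch of the volume wiring, using the secret name from the log and an assumed key name (the e2e secrets conventionally hold a data-1 key), with busybox standing in for the e2e mounttest image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-5749dc36-406b-4990-aebc-5e29926d5287
      items:
      - key: data-1                 # assumed key name
        path: new-path-data-1       # the mapping: the key's data appears under this file name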
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:52:17.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8871
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jan 31 21:52:17.982: INFO: Found 0 stateful pods, waiting for 3
Jan 31 21:52:27.994: INFO: Found 2 stateful pods, waiting for 3
Jan 31 21:52:37.994: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 21:52:37.994: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 21:52:37.994: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 21:52:47.993: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 21:52:47.993: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 21:52:47.993: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 31 21:52:48.027: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 31 21:52:58.138: INFO: Updating stateful set ss2
Jan 31 21:52:58.191: INFO: Waiting for Pod statefulset-8871/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 21:53:08.205: INFO: Waiting for Pod statefulset-8871/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jan 31 21:53:18.362: INFO: Found 2 stateful pods, waiting for 3
Jan 31 21:53:28.372: INFO: Found 2 stateful pods, waiting for 3
Jan 31 21:53:38.383: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 21:53:38.383: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 21:53:38.383: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 31 21:53:38.417: INFO: Updating stateful set ss2
Jan 31 21:53:38.512: INFO: Waiting for Pod statefulset-8871/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 21:53:48.652: INFO: Updating stateful set ss2
Jan 31 21:53:48.787: INFO: Waiting for StatefulSet statefulset-8871/ss2 to complete update
Jan 31 21:53:48.788: INFO: Waiting for Pod statefulset-8871/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 21:53:58.807: INFO: Waiting for StatefulSet statefulset-8871/ss2 to complete update
Jan 31 21:53:58.807: INFO: Waiting for Pod statefulset-8871/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 21:54:08.803: INFO: Waiting for StatefulSet statefulset-8871/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 31 21:54:18.800: INFO: Deleting all statefulset in ns statefulset-8871
Jan 31 21:54:18.804: INFO: Scaling statefulset ss2 to 0
Jan 31 21:54:58.838: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 21:54:58.862: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:54:58.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8871" for this suite.

• [SLOW TEST:161.141 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":107,"skipped":1759,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
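
Canary and phased rolling updates are both driven by spec.updateStrategy.rollingUpdate.partition: pods with an ordinal >= partition move to the new revision, the rest stay put. A sketch of the ss2 spec at the canary stage (labels and container name are assumptions; the image change is the one from the log):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
  namespace: statefulset-8871
spec:
  replicas: 3
  serviceName: test                 # the headless Service created at the start of the test
  selector:
    matchLabels:
      app: ss2                      # assumed label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                  # canary: only ss2-2 is rolled to the new revision
  template:
    metadata:
      labels:
        app: ss2                    # assumed label
    spec:
      containers:
      - name: webserver             # assumed container name
        image: docker.io/library/httpd:2.4.39-alpine   # the updated image

Setting the partition above the replica count (the "Not applying an update" step) freezes every pod on the old revision; lowering it step by step to 0 completes the phased roll.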
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:54:58.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:54:58.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:55:07.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1626" for this suite.

• [SLOW TEST:8.376 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1788,"failed":0}
SSSSSSSSSSSSSSSSS
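
Rather than going through kubectl exec, this test upgrades the pod's exec subresource to a websocket itself. The endpoint is sketched as a comment on a minimal target pod (names and command are assumptions):

# The client dials the exec subresource with a websocket upgrade, roughly:
#   GET /api/v1/namespaces/pods-1626/pods/<pod-name>/exec?command=echo&command=hi&stdout=1&stderr=1
# using the channel.k8s.io subprotocol, whose frames multiplex stdin/stdout/stderr.
apiVersion: v1
kind: Pod
metadata:
  name: pod-exec-websocket-demo     # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]   # keep the container alive as an exec target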
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:55:07.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-znxtk in namespace proxy-102
I0131 21:55:07.489717       8 runners.go:189] Created replication controller with name: proxy-service-znxtk, namespace: proxy-102, replica count: 1
I0131 21:55:08.540841       8 runners.go:189] proxy-service-znxtk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:55:09.541449       8 runners.go:189] proxy-service-znxtk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:55:10.542215       8 runners.go:189] proxy-service-znxtk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:55:11.542986       8 runners.go:189] proxy-service-znxtk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:55:12.544037       8 runners.go:189] proxy-service-znxtk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 21:55:13.544538       8 runners.go:189] proxy-service-znxtk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0131 21:55:14.545519       8 runners.go:189] proxy-service-znxtk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0131 21:55:15.546143       8 runners.go:189] proxy-service-znxtk Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 31 21:55:15.553: INFO: setup took 8.098004518s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 31 21:55:15.577: INFO: (0) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:1080/proxy/: t... (200; 22.023805ms)
Jan 31 21:55:15.583: INFO: (0) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 28.903567ms)
Jan 31 21:55:15.596: INFO: (0) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:462/proxy/: tls qux (200; 42.341138ms)
Jan 31 21:55:15.600: INFO: (0) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 46.302492ms)
Jan 31 21:55:15.600: INFO: (0) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:1080/proxy/: testtest (200; 47.543338ms)
Jan 31 21:55:15.610: INFO: (0) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname1/proxy/: foo (200; 56.327355ms)
Jan 31 21:55:15.610: INFO: (0) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 56.238556ms)
Jan 31 21:55:15.621: INFO: (0) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:443/proxy/: t... (200; 48.036668ms)
Jan 31 21:55:15.672: INFO: (1) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 47.800795ms)
Jan 31 21:55:15.674: INFO: (1) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z/proxy/: test (200; 49.383674ms)
Jan 31 21:55:15.674: INFO: (1) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname2/proxy/: bar (200; 49.928296ms)
Jan 31 21:55:15.674: INFO: (1) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:1080/proxy/: testtesttest (200; 15.898146ms)
Jan 31 21:55:15.696: INFO: (2) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:443/proxy/: t... (200; 16.883338ms)
Jan 31 21:55:15.704: INFO: (3) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 7.242748ms)
Jan 31 21:55:15.705: INFO: (3) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:1080/proxy/: t... (200; 8.119449ms)
Jan 31 21:55:15.706: INFO: (3) /api/v1/namespaces/proxy-102/services/proxy-service-znxtk:portname2/proxy/: bar (200; 8.966697ms)
Jan 31 21:55:15.707: INFO: (3) /api/v1/namespaces/proxy-102/services/https:proxy-service-znxtk:tlsportname2/proxy/: tls qux (200; 9.653507ms)
Jan 31 21:55:15.707: INFO: (3) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:460/proxy/: tls baz (200; 9.725243ms)
Jan 31 21:55:15.707: INFO: (3) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 9.942118ms)
Jan 31 21:55:15.707: INFO: (3) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z/proxy/: test (200; 9.995394ms)
Jan 31 21:55:15.707: INFO: (3) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:1080/proxy/: testt... (200; 10.894995ms)
Jan 31 21:55:15.733: INFO: (4) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 16.247655ms)
Jan 31 21:55:15.733: INFO: (4) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 16.234064ms)
Jan 31 21:55:15.733: INFO: (4) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:460/proxy/: tls baz (200; 16.368682ms)
Jan 31 21:55:15.733: INFO: (4) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 16.27424ms)
Jan 31 21:55:15.735: INFO: (4) /api/v1/namespaces/proxy-102/services/https:proxy-service-znxtk:tlsportname2/proxy/: tls qux (200; 17.996135ms)
Jan 31 21:55:15.735: INFO: (4) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 18.026262ms)
Jan 31 21:55:15.735: INFO: (4) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:462/proxy/: tls qux (200; 18.230944ms)
Jan 31 21:55:15.735: INFO: (4) /api/v1/namespaces/proxy-102/services/https:proxy-service-znxtk:tlsportname1/proxy/: tls baz (200; 18.39221ms)
Jan 31 21:55:15.735: INFO: (4) /api/v1/namespaces/proxy-102/services/proxy-service-znxtk:portname2/proxy/: bar (200; 18.219901ms)
Jan 31 21:55:15.735: INFO: (4) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:1080/proxy/: testtest (200; 18.69523ms)
Jan 31 21:55:15.736: INFO: (4) /api/v1/namespaces/proxy-102/services/proxy-service-znxtk:portname1/proxy/: foo (200; 18.710801ms)
Jan 31 21:55:15.736: INFO: (4) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname2/proxy/: bar (200; 18.636232ms)
Jan 31 21:55:15.736: INFO: (4) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname1/proxy/: foo (200; 18.669882ms)
Jan 31 21:55:15.745: INFO: (5) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:460/proxy/: tls baz (200; 8.938315ms)
Jan 31 21:55:15.745: INFO: (5) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:1080/proxy/: t... (200; 9.209125ms)
Jan 31 21:55:15.746: INFO: (5) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 10.115949ms)
Jan 31 21:55:15.746: INFO: (5) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 10.440214ms)
Jan 31 21:55:15.746: INFO: (5) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z/proxy/: test (200; 10.371547ms)
Jan 31 21:55:15.747: INFO: (5) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:462/proxy/: tls qux (200; 10.775702ms)
Jan 31 21:55:15.749: INFO: (5) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:443/proxy/: testtestt... (200; 14.466644ms)
Jan 31 21:55:15.769: INFO: (6) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 14.77166ms)
Jan 31 21:55:15.769: INFO: (6) /api/v1/namespaces/proxy-102/services/proxy-service-znxtk:portname2/proxy/: bar (200; 14.877293ms)
Jan 31 21:55:15.769: INFO: (6) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname2/proxy/: bar (200; 14.97173ms)
Jan 31 21:55:15.769: INFO: (6) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:462/proxy/: tls qux (200; 15.075408ms)
Jan 31 21:55:15.769: INFO: (6) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 15.174532ms)
Jan 31 21:55:15.769: INFO: (6) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname1/proxy/: foo (200; 15.153779ms)
Jan 31 21:55:15.769: INFO: (6) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z/proxy/: test (200; 15.473189ms)
Jan 31 21:55:15.769: INFO: (6) /api/v1/namespaces/proxy-102/services/proxy-service-znxtk:portname1/proxy/: foo (200; 15.52497ms)
Jan 31 21:55:15.770: INFO: (6) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 15.751357ms)
Jan 31 21:55:15.770: INFO: (6) /api/v1/namespaces/proxy-102/services/https:proxy-service-znxtk:tlsportname2/proxy/: tls qux (200; 15.871803ms)
Jan 31 21:55:15.777: INFO: (7) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z/proxy/: test (200; 7.374853ms)
Jan 31 21:55:15.777: INFO: (7) /api/v1/namespaces/proxy-102/services/proxy-service-znxtk:portname1/proxy/: foo (200; 7.562325ms)
Jan 31 21:55:15.777: INFO: (7) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:443/proxy/: t... (200; 12.505972ms)
Jan 31 21:55:15.784: INFO: (7) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:460/proxy/: tls baz (200; 14.696156ms)
Jan 31 21:55:15.785: INFO: (7) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname1/proxy/: foo (200; 15.005068ms)
Jan 31 21:55:15.785: INFO: (7) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 15.134197ms)
Jan 31 21:55:15.785: INFO: (7) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname2/proxy/: bar (200; 15.098138ms)
Jan 31 21:55:15.785: INFO: (7) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:1080/proxy/: testtestt... (200; 10.685855ms)
Jan 31 21:55:15.796: INFO: (8) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z/proxy/: test (200; 10.493292ms)
Jan 31 21:55:15.796: INFO: (8) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 10.726181ms)
Jan 31 21:55:15.796: INFO: (8) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:460/proxy/: tls baz (200; 10.568262ms)
Jan 31 21:55:15.796: INFO: (8) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 10.86077ms)
Jan 31 21:55:15.797: INFO: (8) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 11.325108ms)
Jan 31 21:55:15.798: INFO: (8) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname1/proxy/: foo (200; 12.964851ms)
Jan 31 21:55:15.799: INFO: (8) /api/v1/namespaces/proxy-102/services/proxy-service-znxtk:portname1/proxy/: foo (200; 13.573016ms)
Jan 31 21:55:15.799: INFO: (8) /api/v1/namespaces/proxy-102/services/https:proxy-service-znxtk:tlsportname2/proxy/: tls qux (200; 13.665916ms)
Jan 31 21:55:15.799: INFO: (8) /api/v1/namespaces/proxy-102/services/https:proxy-service-znxtk:tlsportname1/proxy/: tls baz (200; 13.566382ms)
Jan 31 21:55:15.799: INFO: (8) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname2/proxy/: bar (200; 13.619724ms)
Jan 31 21:55:15.806: INFO: (9) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:460/proxy/: tls baz (200; 7.313462ms)
Jan 31 21:55:15.806: INFO: (9) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:443/proxy/: t... (200; 7.537479ms)
Jan 31 21:55:15.807: INFO: (9) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z/proxy/: test (200; 8.153361ms)
Jan 31 21:55:15.810: INFO: (9) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 11.31435ms)
Jan 31 21:55:15.810: INFO: (9) /api/v1/namespaces/proxy-102/services/proxy-service-znxtk:portname1/proxy/: foo (200; 11.496027ms)
Jan 31 21:55:15.810: INFO: (9) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname2/proxy/: bar (200; 11.350461ms)
Jan 31 21:55:15.812: INFO: (9) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:1080/proxy/: testtestt... (200; 11.610582ms)
Jan 31 21:55:15.827: INFO: (10) /api/v1/namespaces/proxy-102/services/https:proxy-service-znxtk:tlsportname1/proxy/: tls baz (200; 11.796353ms)
Jan 31 21:55:15.827: INFO: (10) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname1/proxy/: foo (200; 11.829848ms)
Jan 31 21:55:15.827: INFO: (10) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 11.971783ms)
Jan 31 21:55:15.827: INFO: (10) /api/v1/namespaces/proxy-102/services/https:proxy-service-znxtk:tlsportname2/proxy/: tls qux (200; 11.919156ms)
Jan 31 21:55:15.827: INFO: (10) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z/proxy/: test (200; 11.977433ms)
Jan 31 21:55:15.828: INFO: (10) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 12.714172ms)
Jan 31 21:55:15.828: INFO: (10) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 12.978093ms)
Jan 31 21:55:15.828: INFO: (10) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 13.189456ms)
Jan 31 21:55:15.857: INFO: (11) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 29.077442ms)
Jan 31 21:55:15.858: INFO: (11) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:460/proxy/: tls baz (200; 29.362123ms)
Jan 31 21:55:15.858: INFO: (11) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:1080/proxy/: t... (200; 29.479887ms)
Jan 31 21:55:15.858: INFO: (11) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 29.6207ms)
Jan 31 21:55:15.858: INFO: (11) /api/v1/namespaces/proxy-102/services/https:proxy-service-znxtk:tlsportname1/proxy/: tls baz (200; 29.945908ms)
Jan 31 21:55:15.860: INFO: (11) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 31.442969ms)
Jan 31 21:55:15.860: INFO: (11) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 31.486087ms)
Jan 31 21:55:15.861: INFO: (11) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname2/proxy/: bar (200; 32.179516ms)
Jan 31 21:55:15.862: INFO: (11) /api/v1/namespaces/proxy-102/services/https:proxy-service-znxtk:tlsportname2/proxy/: tls qux (200; 33.163974ms)
Jan 31 21:55:15.862: INFO: (11) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:1080/proxy/: testtest (200; 39.581737ms)
Jan 31 21:55:15.882: INFO: (12) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:1080/proxy/: testtest (200; 14.743408ms)
Jan 31 21:55:15.883: INFO: (12) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:443/proxy/: t... (200; 15.597934ms)
Jan 31 21:55:15.884: INFO: (12) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 15.95207ms)
Jan 31 21:55:15.885: INFO: (12) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 16.144702ms)
Jan 31 21:55:15.885: INFO: (12) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:462/proxy/: tls qux (200; 16.660599ms)
Jan 31 21:55:15.885: INFO: (12) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 16.288226ms)
Jan 31 21:55:15.903: INFO: (13) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:443/proxy/: test (200; 23.546976ms)
Jan 31 21:55:15.910: INFO: (13) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 23.363426ms)
Jan 31 21:55:15.910: INFO: (13) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:462/proxy/: tls qux (200; 22.834955ms)
Jan 31 21:55:15.910: INFO: (13) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 24.520407ms)
Jan 31 21:55:15.913: INFO: (13) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 25.121187ms)
Jan 31 21:55:15.914: INFO: (13) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:1080/proxy/: testt... (200; 29.638437ms)
Jan 31 21:55:15.917: INFO: (13) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname1/proxy/: foo (200; 31.35685ms)
Jan 31 21:55:15.917: INFO: (13) /api/v1/namespaces/proxy-102/services/https:proxy-service-znxtk:tlsportname2/proxy/: tls qux (200; 29.973667ms)
Jan 31 21:55:15.917: INFO: (13) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname2/proxy/: bar (200; 31.37994ms)
Jan 31 21:55:15.919: INFO: (13) /api/v1/namespaces/proxy-102/services/proxy-service-znxtk:portname2/proxy/: bar (200; 32.928293ms)
Jan 31 21:55:15.925: INFO: (14) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 5.628316ms)
Jan 31 21:55:15.926: INFO: (14) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 7.033594ms)
Jan 31 21:55:15.931: INFO: (14) /api/v1/namespaces/proxy-102/services/proxy-service-znxtk:portname1/proxy/: foo (200; 11.525848ms)
Jan 31 21:55:15.931: INFO: (14) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:1080/proxy/: t... (200; 11.528408ms)
Jan 31 21:55:15.931: INFO: (14) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname1/proxy/: foo (200; 12.071726ms)
Jan 31 21:55:15.931: INFO: (14) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:1080/proxy/: testtest (200; 13.273241ms)
Jan 31 21:55:15.933: INFO: (14) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:460/proxy/: tls baz (200; 13.392522ms)
Jan 31 21:55:15.934: INFO: (14) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:443/proxy/: t... (200; 8.053474ms)
Jan 31 21:55:15.944: INFO: (15) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z/proxy/: test (200; 8.596343ms)
Jan 31 21:55:15.944: INFO: (15) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 8.715349ms)
Jan 31 21:55:15.944: INFO: (15) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 8.706367ms)
Jan 31 21:55:15.944: INFO: (15) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:460/proxy/: tls baz (200; 9.287631ms)
Jan 31 21:55:15.944: INFO: (15) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:443/proxy/: testtest (200; 8.947908ms)
Jan 31 21:55:15.961: INFO: (16) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:462/proxy/: tls qux (200; 9.998575ms)
Jan 31 21:55:15.961: INFO: (16) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 10.035135ms)
Jan 31 21:55:15.961: INFO: (16) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname1/proxy/: foo (200; 10.103207ms)
Jan 31 21:55:15.961: INFO: (16) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:443/proxy/: t... (200; 10.247698ms)
Jan 31 21:55:15.961: INFO: (16) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:460/proxy/: tls baz (200; 10.084982ms)
Jan 31 21:55:15.961: INFO: (16) /api/v1/namespaces/proxy-102/services/https:proxy-service-znxtk:tlsportname1/proxy/: tls baz (200; 10.322697ms)
Jan 31 21:55:15.963: INFO: (16) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:1080/proxy/: testtesttest (200; 10.042292ms)
Jan 31 21:55:15.975: INFO: (17) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:1080/proxy/: t... (200; 9.997491ms)
Jan 31 21:55:15.975: INFO: (17) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 10.058997ms)
Jan 31 21:55:15.976: INFO: (17) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 10.09606ms)
Jan 31 21:55:15.976: INFO: (17) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 10.197552ms)
Jan 31 21:55:15.976: INFO: (17) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 10.904317ms)
Jan 31 21:55:15.976: INFO: (17) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:460/proxy/: tls baz (200; 10.96114ms)
Jan 31 21:55:15.976: INFO: (17) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:462/proxy/: tls qux (200; 11.036132ms)
Jan 31 21:55:15.977: INFO: (17) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:443/proxy/: testt... (200; 10.441944ms)
Jan 31 21:55:15.988: INFO: (18) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 10.708718ms)
Jan 31 21:55:15.988: INFO: (18) /api/v1/namespaces/proxy-102/services/https:proxy-service-znxtk:tlsportname2/proxy/: tls qux (200; 11.010642ms)
Jan 31 21:55:15.988: INFO: (18) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 11.137247ms)
Jan 31 21:55:15.988: INFO: (18) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname2/proxy/: bar (200; 11.125472ms)
Jan 31 21:55:15.988: INFO: (18) /api/v1/namespaces/proxy-102/pods/https:proxy-service-znxtk-kwn8z:462/proxy/: tls qux (200; 11.163631ms)
Jan 31 21:55:15.989: INFO: (18) /api/v1/namespaces/proxy-102/services/http:proxy-service-znxtk:portname1/proxy/: foo (200; 11.533666ms)
Jan 31 21:55:15.989: INFO: (18) /api/v1/namespaces/proxy-102/services/https:proxy-service-znxtk:tlsportname1/proxy/: tls baz (200; 11.582504ms)
Jan 31 21:55:15.989: INFO: (18) /api/v1/namespaces/proxy-102/services/proxy-service-znxtk:portname1/proxy/: foo (200; 11.851544ms)
Jan 31 21:55:16.101: INFO: (18) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z/proxy/: test (200; 123.30325ms)
Jan 31 21:55:16.113: INFO: (19) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:162/proxy/: bar (200; 12.22849ms)
Jan 31 21:55:16.114: INFO: (19) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:160/proxy/: foo (200; 12.918441ms)
Jan 31 21:55:16.117: INFO: (19) /api/v1/namespaces/proxy-102/pods/http:proxy-service-znxtk-kwn8z:1080/proxy/: t... (200; 15.681699ms)
Jan 31 21:55:16.118: INFO: (19) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z/proxy/: test (200; 16.664906ms)
Jan 31 21:55:16.118: INFO: (19) /api/v1/namespaces/proxy-102/pods/proxy-service-znxtk-kwn8z:1080/proxy/: test
[... response status/latency, the remaining (19) proxy responses, and the proxy test's teardown and PASSED line are truncated in the source ...]
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:56:32.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8700" for this suite.

• [SLOW TEST:60.280 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1823,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:56:32.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 21:56:32.822: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8487823f-e62f-4068-bbf4-ad9f4de85de4" in namespace "projected-3527" to be "success or failure"
Jan 31 21:56:32.863: INFO: Pod "downwardapi-volume-8487823f-e62f-4068-bbf4-ad9f4de85de4": Phase="Pending", Reason="", readiness=false. Elapsed: 40.608078ms
Jan 31 21:56:34.869: INFO: Pod "downwardapi-volume-8487823f-e62f-4068-bbf4-ad9f4de85de4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047114934s
Jan 31 21:56:36.881: INFO: Pod "downwardapi-volume-8487823f-e62f-4068-bbf4-ad9f4de85de4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05869435s
Jan 31 21:56:38.887: INFO: Pod "downwardapi-volume-8487823f-e62f-4068-bbf4-ad9f4de85de4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065294012s
Jan 31 21:56:40.897: INFO: Pod "downwardapi-volume-8487823f-e62f-4068-bbf4-ad9f4de85de4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075148658s
Jan 31 21:56:42.903: INFO: Pod "downwardapi-volume-8487823f-e62f-4068-bbf4-ad9f4de85de4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080713719s
STEP: Saw pod success
Jan 31 21:56:42.903: INFO: Pod "downwardapi-volume-8487823f-e62f-4068-bbf4-ad9f4de85de4" satisfied condition "success or failure"
Jan 31 21:56:42.907: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8487823f-e62f-4068-bbf4-ad9f4de85de4 container client-container: 
STEP: delete the pod
Jan 31 21:56:42.959: INFO: Waiting for pod downwardapi-volume-8487823f-e62f-4068-bbf4-ad9f4de85de4 to disappear
Jan 31 21:56:42.980: INFO: Pod downwardapi-volume-8487823f-e62f-4068-bbf4-ad9f4de85de4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:56:42.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3527" for this suite.

• [SLOW TEST:10.299 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1853,"failed":0}
SSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:56:42.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:56:43.127: INFO: Waiting up to 5m0s for pod "busybox-user-65534-f3eda74d-496c-4049-beb8-e59faf5e9452" in namespace "security-context-test-3684" to be "success or failure"
Jan 31 21:56:43.131: INFO: Pod "busybox-user-65534-f3eda74d-496c-4049-beb8-e59faf5e9452": Phase="Pending", Reason="", readiness=false. Elapsed: 3.675299ms
Jan 31 21:56:45.137: INFO: Pod "busybox-user-65534-f3eda74d-496c-4049-beb8-e59faf5e9452": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009348829s
Jan 31 21:56:47.162: INFO: Pod "busybox-user-65534-f3eda74d-496c-4049-beb8-e59faf5e9452": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034740738s
Jan 31 21:56:49.168: INFO: Pod "busybox-user-65534-f3eda74d-496c-4049-beb8-e59faf5e9452": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040827453s
Jan 31 21:56:51.173: INFO: Pod "busybox-user-65534-f3eda74d-496c-4049-beb8-e59faf5e9452": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04622009s
Jan 31 21:56:51.174: INFO: Pod "busybox-user-65534-f3eda74d-496c-4049-beb8-e59faf5e9452" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:56:51.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3684" for this suite.

• [SLOW TEST:8.198 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1857,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:56:51.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 31 21:56:51.291: INFO: Waiting up to 5m0s for pod "pod-8942085b-6c6b-4d6d-bc61-b70132fe26b4" in namespace "emptydir-9823" to be "success or failure"
Jan 31 21:56:51.769: INFO: Pod "pod-8942085b-6c6b-4d6d-bc61-b70132fe26b4": Phase="Pending", Reason="", readiness=false. Elapsed: 478.124985ms
Jan 31 21:56:53.777: INFO: Pod "pod-8942085b-6c6b-4d6d-bc61-b70132fe26b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.486403994s
Jan 31 21:56:55.786: INFO: Pod "pod-8942085b-6c6b-4d6d-bc61-b70132fe26b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.494718705s
Jan 31 21:56:57.795: INFO: Pod "pod-8942085b-6c6b-4d6d-bc61-b70132fe26b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.503951117s
Jan 31 21:56:59.803: INFO: Pod "pod-8942085b-6c6b-4d6d-bc61-b70132fe26b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.511894479s
STEP: Saw pod success
Jan 31 21:56:59.803: INFO: Pod "pod-8942085b-6c6b-4d6d-bc61-b70132fe26b4" satisfied condition "success or failure"
Jan 31 21:56:59.808: INFO: Trying to get logs from node jerma-node pod pod-8942085b-6c6b-4d6d-bc61-b70132fe26b4 container test-container: 
STEP: delete the pod
Jan 31 21:56:59.904: INFO: Waiting for pod pod-8942085b-6c6b-4d6d-bc61-b70132fe26b4 to disappear
Jan 31 21:56:59.911: INFO: Pod pod-8942085b-6c6b-4d6d-bc61-b70132fe26b4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:56:59.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9823" for this suite.

• [SLOW TEST:8.738 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1870,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:56:59.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 21:57:00.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2032aefd-71bc-463a-be26-fcd1fcf486fc" in namespace "projected-9671" to be "success or failure"
Jan 31 21:57:00.052: INFO: Pod "downwardapi-volume-2032aefd-71bc-463a-be26-fcd1fcf486fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.548861ms
Jan 31 21:57:02.060: INFO: Pod "downwardapi-volume-2032aefd-71bc-463a-be26-fcd1fcf486fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010027061s
Jan 31 21:57:04.067: INFO: Pod "downwardapi-volume-2032aefd-71bc-463a-be26-fcd1fcf486fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017417154s
Jan 31 21:57:06.074: INFO: Pod "downwardapi-volume-2032aefd-71bc-463a-be26-fcd1fcf486fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024520062s
Jan 31 21:57:08.080: INFO: Pod "downwardapi-volume-2032aefd-71bc-463a-be26-fcd1fcf486fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0308014s
STEP: Saw pod success
Jan 31 21:57:08.081: INFO: Pod "downwardapi-volume-2032aefd-71bc-463a-be26-fcd1fcf486fc" satisfied condition "success or failure"
Jan 31 21:57:08.084: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2032aefd-71bc-463a-be26-fcd1fcf486fc container client-container: 
STEP: delete the pod
Jan 31 21:57:08.214: INFO: Waiting for pod downwardapi-volume-2032aefd-71bc-463a-be26-fcd1fcf486fc to disappear
Jan 31 21:57:08.222: INFO: Pod downwardapi-volume-2032aefd-71bc-463a-be26-fcd1fcf486fc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:57:08.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9671" for this suite.

• [SLOW TEST:8.323 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1874,"failed":0}
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:57:08.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Jan 31 21:57:08.426: INFO: Waiting up to 5m0s for pod "var-expansion-9e11edea-e7bd-4abc-859a-83cc0936eed8" in namespace "var-expansion-789" to be "success or failure"
Jan 31 21:57:08.444: INFO: Pod "var-expansion-9e11edea-e7bd-4abc-859a-83cc0936eed8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.193855ms
Jan 31 21:57:10.451: INFO: Pod "var-expansion-9e11edea-e7bd-4abc-859a-83cc0936eed8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024677576s
Jan 31 21:57:12.458: INFO: Pod "var-expansion-9e11edea-e7bd-4abc-859a-83cc0936eed8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031320807s
Jan 31 21:57:14.464: INFO: Pod "var-expansion-9e11edea-e7bd-4abc-859a-83cc0936eed8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037264278s
Jan 31 21:57:16.471: INFO: Pod "var-expansion-9e11edea-e7bd-4abc-859a-83cc0936eed8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045106671s
STEP: Saw pod success
Jan 31 21:57:16.472: INFO: Pod "var-expansion-9e11edea-e7bd-4abc-859a-83cc0936eed8" satisfied condition "success or failure"
Jan 31 21:57:16.475: INFO: Trying to get logs from node jerma-node pod var-expansion-9e11edea-e7bd-4abc-859a-83cc0936eed8 container dapi-container: 
STEP: delete the pod
Jan 31 21:57:16.703: INFO: Waiting for pod var-expansion-9e11edea-e7bd-4abc-859a-83cc0936eed8 to disappear
Jan 31 21:57:16.721: INFO: Pod var-expansion-9e11edea-e7bd-4abc-859a-83cc0936eed8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:57:16.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-789" for this suite.

• [SLOW TEST:8.568 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1879,"failed":0}
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:57:16.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-152f02b8-8312-48d2-b7ec-0bae270bd150 in namespace container-probe-638
Jan 31 21:57:25.075: INFO: Started pod busybox-152f02b8-8312-48d2-b7ec-0bae270bd150 in namespace container-probe-638
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 21:57:25.079: INFO: Initial restart count of pod busybox-152f02b8-8312-48d2-b7ec-0bae270bd150 is 0
Jan 31 21:58:15.680: INFO: Restart count of pod container-probe-638/busybox-152f02b8-8312-48d2-b7ec-0bae270bd150 is now 1 (50.601080883s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:58:15.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-638" for this suite.

• [SLOW TEST:58.971 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1882,"failed":0}
SS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:58:15.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-f7dbeae0-44e6-469e-91c8-592263d778a7
Jan 31 21:58:16.086: INFO: Pod name my-hostname-basic-f7dbeae0-44e6-469e-91c8-592263d778a7: Found 0 pods out of 1
Jan 31 21:58:21.091: INFO: Pod name my-hostname-basic-f7dbeae0-44e6-469e-91c8-592263d778a7: Found 1 pods out of 1
Jan 31 21:58:21.091: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f7dbeae0-44e6-469e-91c8-592263d778a7" are running
Jan 31 21:58:27.099: INFO: Pod "my-hostname-basic-f7dbeae0-44e6-469e-91c8-592263d778a7-h9rbf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 21:58:16 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 21:58:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f7dbeae0-44e6-469e-91c8-592263d778a7]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 21:58:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f7dbeae0-44e6-469e-91c8-592263d778a7]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 21:58:16 +0000 UTC Reason: Message:}])
Jan 31 21:58:27.099: INFO: Trying to dial the pod
Jan 31 21:58:32.119: INFO: Controller my-hostname-basic-f7dbeae0-44e6-469e-91c8-592263d778a7: Got expected result from replica 1 [my-hostname-basic-f7dbeae0-44e6-469e-91c8-592263d778a7-h9rbf]: "my-hostname-basic-f7dbeae0-44e6-469e-91c8-592263d778a7-h9rbf", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:58:32.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1472" for this suite.

• [SLOW TEST:16.355 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":117,"skipped":1884,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:58:32.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 21:58:32.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5711'
Jan 31 21:58:34.572: INFO: stderr: ""
Jan 31 21:58:34.572: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jan 31 21:58:34.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5711'
Jan 31 21:58:35.021: INFO: stderr: ""
Jan 31 21:58:35.021: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 31 21:58:36.026: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:58:36.026: INFO: Found 0 / 1
Jan 31 21:58:37.028: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:58:37.028: INFO: Found 0 / 1
Jan 31 21:58:38.036: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:58:38.037: INFO: Found 0 / 1
Jan 31 21:58:39.028: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:58:39.028: INFO: Found 0 / 1
Jan 31 21:58:40.028: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:58:40.028: INFO: Found 0 / 1
Jan 31 21:58:41.062: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:58:41.062: INFO: Found 0 / 1
Jan 31 21:58:42.028: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:58:42.028: INFO: Found 0 / 1
Jan 31 21:58:43.028: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:58:43.028: INFO: Found 1 / 1
Jan 31 21:58:43.028: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 31 21:58:43.033: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 21:58:43.033: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 31 21:58:43.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-tq2rr --namespace=kubectl-5711'
Jan 31 21:58:43.213: INFO: stderr: ""
Jan 31 21:58:43.213: INFO: stdout: "Name:         agnhost-master-tq2rr\nNamespace:    kubectl-5711\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Fri, 31 Jan 2020 21:58:34 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.44.0.2\nIPs:\n  IP:           10.44.0.2\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://fee02678dc7ead8026471b1008bcee4e7fb5763d12411253a6f452f36ac30dd8\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 31 Jan 2020 21:58:41 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cxrsq (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-cxrsq:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-cxrsq\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled    default-scheduler    Successfully assigned kubectl-5711/agnhost-master-tq2rr to jerma-node\n  Normal  Pulled     5s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    2s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    2s         kubelet, jerma-node  Started container agnhost-master\n"
Jan 31 21:58:43.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5711'
Jan 31 21:58:43.383: INFO: stderr: ""
Jan 31 21:58:43.383: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-5711\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: agnhost-master-tq2rr\n"
Jan 31 21:58:43.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5711'
Jan 31 21:58:43.478: INFO: stderr: ""
Jan 31 21:58:43.478: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-5711\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.26.128\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.2:6379\nSession Affinity:  None\nEvents:            \n"
Jan 31 21:58:43.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Jan 31 21:58:43.639: INFO: stderr: ""
Jan 31 21:58:43.639: INFO: stdout: "Name:               jerma-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             \nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     \n  RenewTime:       Fri, 31 Jan 2020 21:58:39 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Fri, 31 Jan 2020 21:56:07 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 31 Jan 2020 21:56:07 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 31 Jan 2020 21:56:07 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 31 Jan 2020 21:56:07 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         27d\n  kubectl-5711                agnhost-master-tq2rr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Jan 31 21:58:43.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5711'
Jan 31 21:58:43.733: INFO: stderr: ""
Jan 31 21:58:43.733: INFO: stdout: "Name:         kubectl-5711\nLabels:       e2e-framework=kubectl\n              e2e-run=84d426e0-3c7f-49d7-9b96-379ccbf45ea2\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:58:43.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5711" for this suite.

• [SLOW TEST:11.597 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":118,"skipped":1892,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:58:43.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 31 21:58:43.837: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 31 21:58:43.859: INFO: Waiting for terminating namespaces to be deleted...
Jan 31 21:58:43.872: INFO: 
Logging pods the kubelet thinks is on node jerma-node before test
Jan 31 21:58:43.900: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 31 21:58:43.900: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 21:58:43.900: INFO: agnhost-master-tq2rr from kubectl-5711 started at 2020-01-31 21:58:34 +0000 UTC (1 container statuses recorded)
Jan 31 21:58:43.900: INFO: 	Container agnhost-master ready: true, restart count 0
Jan 31 21:58:43.900: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 31 21:58:43.900: INFO: 	Container weave ready: true, restart count 1
Jan 31 21:58:43.900: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 21:58:43.900: INFO: 
Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Jan 31 21:58:43.976: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 31 21:58:43.976: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 31 21:58:43.976: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 31 21:58:43.976: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 21:58:43.976: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 31 21:58:43.976: INFO: 	Container weave ready: true, restart count 0
Jan 31 21:58:43.976: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 21:58:43.976: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 31 21:58:43.976: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 31 21:58:43.976: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 31 21:58:43.976: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 31 21:58:43.976: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 31 21:58:43.976: INFO: 	Container etcd ready: true, restart count 1
Jan 31 21:58:43.976: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 31 21:58:43.976: INFO: 	Container coredns ready: true, restart count 0
Jan 31 21:58:43.976: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 31 21:58:43.976: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-9657fea8-1a9a-4fe4-94d6-76b037eb6418 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-9657fea8-1a9a-4fe4-94d6-76b037eb6418 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-9657fea8-1a9a-4fe4-94d6-76b037eb6418
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 21:59:16.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8325" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:32.629 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":119,"skipped":1933,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 21:59:16.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-afa10964-91db-45eb-8126-7cd788c8f8e2 in namespace container-probe-7816
Jan 31 21:59:30.535: INFO: Started pod liveness-afa10964-91db-45eb-8126-7cd788c8f8e2 in namespace container-probe-7816
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 21:59:30.545: INFO: Initial restart count of pod liveness-afa10964-91db-45eb-8126-7cd788c8f8e2 is 0
Jan 31 21:59:40.712: INFO: Restart count of pod container-probe-7816/liveness-afa10964-91db-45eb-8126-7cd788c8f8e2 is now 1 (10.167629148s elapsed)
Jan 31 22:00:00.779: INFO: Restart count of pod container-probe-7816/liveness-afa10964-91db-45eb-8126-7cd788c8f8e2 is now 2 (30.233748036s elapsed)
Jan 31 22:00:20.855: INFO: Restart count of pod container-probe-7816/liveness-afa10964-91db-45eb-8126-7cd788c8f8e2 is now 3 (50.310275532s elapsed)
Jan 31 22:00:41.181: INFO: Restart count of pod container-probe-7816/liveness-afa10964-91db-45eb-8126-7cd788c8f8e2 is now 4 (1m10.636330039s elapsed)
Jan 31 22:01:51.518: INFO: Restart count of pod container-probe-7816/liveness-afa10964-91db-45eb-8126-7cd788c8f8e2 is now 5 (2m20.97290942s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:01:51.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7816" for this suite.

• [SLOW TEST:155.240 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1942,"failed":0}
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:01:51.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 31 22:01:51.734: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:02:04.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5915" for this suite.

• [SLOW TEST:12.454 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":121,"skipped":1946,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:02:04.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 22:02:04.164: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19adcf30-b9f8-4a4c-9924-f4b49725d075" in namespace "downward-api-3109" to be "success or failure"
Jan 31 22:02:04.172: INFO: Pod "downwardapi-volume-19adcf30-b9f8-4a4c-9924-f4b49725d075": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111599ms
Jan 31 22:02:06.180: INFO: Pod "downwardapi-volume-19adcf30-b9f8-4a4c-9924-f4b49725d075": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015900869s
Jan 31 22:02:08.186: INFO: Pod "downwardapi-volume-19adcf30-b9f8-4a4c-9924-f4b49725d075": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021723552s
Jan 31 22:02:10.192: INFO: Pod "downwardapi-volume-19adcf30-b9f8-4a4c-9924-f4b49725d075": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027159367s
Jan 31 22:02:12.197: INFO: Pod "downwardapi-volume-19adcf30-b9f8-4a4c-9924-f4b49725d075": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.032793474s
STEP: Saw pod success
Jan 31 22:02:12.197: INFO: Pod "downwardapi-volume-19adcf30-b9f8-4a4c-9924-f4b49725d075" satisfied condition "success or failure"
Jan 31 22:02:12.201: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-19adcf30-b9f8-4a4c-9924-f4b49725d075 container client-container: 
STEP: delete the pod
Jan 31 22:02:12.277: INFO: Waiting for pod downwardapi-volume-19adcf30-b9f8-4a4c-9924-f4b49725d075 to disappear
Jan 31 22:02:12.287: INFO: Pod downwardapi-volume-19adcf30-b9f8-4a4c-9924-f4b49725d075 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:02:12.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3109" for this suite.

• [SLOW TEST:8.233 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1966,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:02:12.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7682.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7682.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7682.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7682.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 22:02:24.610: INFO: DNS probes using dns-test-9c95be07-1029-4fb1-bb80-d9cb68713a3d succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7682.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7682.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7682.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7682.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 22:02:38.836: INFO: File wheezy_udp@dns-test-service-3.dns-7682.svc.cluster.local from pod  dns-7682/dns-test-39ee00d7-6f0d-4095-9f46-4ec12f945193 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 31 22:02:38.842: INFO: File jessie_udp@dns-test-service-3.dns-7682.svc.cluster.local from pod  dns-7682/dns-test-39ee00d7-6f0d-4095-9f46-4ec12f945193 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 31 22:02:38.843: INFO: Lookups using dns-7682/dns-test-39ee00d7-6f0d-4095-9f46-4ec12f945193 failed for: [wheezy_udp@dns-test-service-3.dns-7682.svc.cluster.local jessie_udp@dns-test-service-3.dns-7682.svc.cluster.local]

Jan 31 22:02:43.867: INFO: File wheezy_udp@dns-test-service-3.dns-7682.svc.cluster.local from pod  dns-7682/dns-test-39ee00d7-6f0d-4095-9f46-4ec12f945193 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 31 22:02:43.877: INFO: File jessie_udp@dns-test-service-3.dns-7682.svc.cluster.local from pod  dns-7682/dns-test-39ee00d7-6f0d-4095-9f46-4ec12f945193 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 31 22:02:43.877: INFO: Lookups using dns-7682/dns-test-39ee00d7-6f0d-4095-9f46-4ec12f945193 failed for: [wheezy_udp@dns-test-service-3.dns-7682.svc.cluster.local jessie_udp@dns-test-service-3.dns-7682.svc.cluster.local]

Jan 31 22:02:48.859: INFO: File wheezy_udp@dns-test-service-3.dns-7682.svc.cluster.local from pod  dns-7682/dns-test-39ee00d7-6f0d-4095-9f46-4ec12f945193 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 31 22:02:48.868: INFO: File jessie_udp@dns-test-service-3.dns-7682.svc.cluster.local from pod  dns-7682/dns-test-39ee00d7-6f0d-4095-9f46-4ec12f945193 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 31 22:02:48.868: INFO: Lookups using dns-7682/dns-test-39ee00d7-6f0d-4095-9f46-4ec12f945193 failed for: [wheezy_udp@dns-test-service-3.dns-7682.svc.cluster.local jessie_udp@dns-test-service-3.dns-7682.svc.cluster.local]

Jan 31 22:02:53.877: INFO: DNS probes using dns-test-39ee00d7-6f0d-4095-9f46-4ec12f945193 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7682.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7682.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7682.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7682.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 22:03:06.171: INFO: DNS probes using dns-test-13f9cc1a-d336-4ed0-8512-aa5c252a0fd2 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:03:06.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7682" for this suite.

• [SLOW TEST:53.940 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":123,"skipped":2045,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:03:06.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Jan 31 22:03:06.385: INFO: Waiting up to 5m0s for pod "var-expansion-7a2a891c-450b-4200-9837-91f9518d51e1" in namespace "var-expansion-2098" to be "success or failure"
Jan 31 22:03:06.480: INFO: Pod "var-expansion-7a2a891c-450b-4200-9837-91f9518d51e1": Phase="Pending", Reason="", readiness=false. Elapsed: 94.089402ms
Jan 31 22:03:08.487: INFO: Pod "var-expansion-7a2a891c-450b-4200-9837-91f9518d51e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101362068s
Jan 31 22:03:10.494: INFO: Pod "var-expansion-7a2a891c-450b-4200-9837-91f9518d51e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109035149s
Jan 31 22:03:12.510: INFO: Pod "var-expansion-7a2a891c-450b-4200-9837-91f9518d51e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124178532s
Jan 31 22:03:14.521: INFO: Pod "var-expansion-7a2a891c-450b-4200-9837-91f9518d51e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135095112s
Jan 31 22:03:16.530: INFO: Pod "var-expansion-7a2a891c-450b-4200-9837-91f9518d51e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.144488581s
STEP: Saw pod success
Jan 31 22:03:16.530: INFO: Pod "var-expansion-7a2a891c-450b-4200-9837-91f9518d51e1" satisfied condition "success or failure"
Jan 31 22:03:16.537: INFO: Trying to get logs from node jerma-node pod var-expansion-7a2a891c-450b-4200-9837-91f9518d51e1 container dapi-container: 
STEP: delete the pod
Jan 31 22:03:16.588: INFO: Waiting for pod var-expansion-7a2a891c-450b-4200-9837-91f9518d51e1 to disappear
Jan 31 22:03:16.593: INFO: Pod var-expansion-7a2a891c-450b-4200-9837-91f9518d51e1 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:03:16.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2098" for this suite.

• [SLOW TEST:10.362 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2073,"failed":0}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:03:16.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 31 22:03:16.756: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4838 /api/v1/namespaces/watch-4838/configmaps/e2e-watch-test-label-changed 86ab469d-2479-436d-a270-4e8e2e36b1b4 5606299 0 2020-01-31 22:03:16 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 31 22:03:16.756: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4838 /api/v1/namespaces/watch-4838/configmaps/e2e-watch-test-label-changed 86ab469d-2479-436d-a270-4e8e2e36b1b4 5606300 0 2020-01-31 22:03:16 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 31 22:03:16.756: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4838 /api/v1/namespaces/watch-4838/configmaps/e2e-watch-test-label-changed 86ab469d-2479-436d-a270-4e8e2e36b1b4 5606301 0 2020-01-31 22:03:16 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 31 22:03:26.818: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4838 /api/v1/namespaces/watch-4838/configmaps/e2e-watch-test-label-changed 86ab469d-2479-436d-a270-4e8e2e36b1b4 5606335 0 2020-01-31 22:03:16 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 31 22:03:26.818: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4838 /api/v1/namespaces/watch-4838/configmaps/e2e-watch-test-label-changed 86ab469d-2479-436d-a270-4e8e2e36b1b4 5606336 0 2020-01-31 22:03:16 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 31 22:03:26.818: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4838 /api/v1/namespaces/watch-4838/configmaps/e2e-watch-test-label-changed 86ab469d-2479-436d-a270-4e8e2e36b1b4 5606337 0 2020-01-31 22:03:16 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:03:26.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4838" for this suite.

• [SLOW TEST:10.265 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":125,"skipped":2076,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:03:26.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 22:03:26.959: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66402907-b9f7-44d7-bdb2-6702aa475266" in namespace "projected-8247" to be "success or failure"
Jan 31 22:03:27.012: INFO: Pod "downwardapi-volume-66402907-b9f7-44d7-bdb2-6702aa475266": Phase="Pending", Reason="", readiness=false. Elapsed: 52.902446ms
Jan 31 22:03:29.017: INFO: Pod "downwardapi-volume-66402907-b9f7-44d7-bdb2-6702aa475266": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057485435s
Jan 31 22:03:31.021: INFO: Pod "downwardapi-volume-66402907-b9f7-44d7-bdb2-6702aa475266": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061732368s
Jan 31 22:03:33.025: INFO: Pod "downwardapi-volume-66402907-b9f7-44d7-bdb2-6702aa475266": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066027963s
Jan 31 22:03:35.032: INFO: Pod "downwardapi-volume-66402907-b9f7-44d7-bdb2-6702aa475266": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073042323s
Jan 31 22:03:37.040: INFO: Pod "downwardapi-volume-66402907-b9f7-44d7-bdb2-6702aa475266": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081031032s
STEP: Saw pod success
Jan 31 22:03:37.040: INFO: Pod "downwardapi-volume-66402907-b9f7-44d7-bdb2-6702aa475266" satisfied condition "success or failure"
Jan 31 22:03:37.045: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-66402907-b9f7-44d7-bdb2-6702aa475266 container client-container: 
STEP: delete the pod
Jan 31 22:03:37.093: INFO: Waiting for pod downwardapi-volume-66402907-b9f7-44d7-bdb2-6702aa475266 to disappear
Jan 31 22:03:37.111: INFO: Pod downwardapi-volume-66402907-b9f7-44d7-bdb2-6702aa475266 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:03:37.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8247" for this suite.

• [SLOW TEST:10.248 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2098,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:03:37.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 31 22:03:38.188: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 31 22:03:40.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105018, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105018, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105018, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105018, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:03:42.279: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105018, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105018, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105018, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105018, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:03:44.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105018, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105018, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105018, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105018, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 22:03:47.276: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:03:47.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:03:48.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3959" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.675 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":127,"skipped":2117,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:03:48.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:03:48.937: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 31 22:03:48.957: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 31 22:03:53.972: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 31 22:03:57.982: INFO: Creating deployment "test-rolling-update-deployment"
Jan 31 22:03:57.989: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 31 22:03:58.012: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 31 22:04:00.032: INFO: Ensuring status for deployment "test-rolling-update-deployment" matches the expected state
Jan 31 22:04:00.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105038, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105038, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105038, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105038, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:04:02.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105038, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105038, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105038, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105038, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:04:04.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105038, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105038, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105038, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105038, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:04:06.042: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 31 22:04:06.051: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-7184 /apis/apps/v1/namespaces/deployment-7184/deployments/test-rolling-update-deployment 97e1bcd8-83e8-4aaa-813d-392309c1a422 5606563 1 2020-01-31 22:03:57 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002749588  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-31 22:03:58 +0000 UTC,LastTransitionTime:2020-01-31 22:03:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-31 22:04:05 +0000 UTC,LastTransitionTime:2020-01-31 22:03:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 31 22:04:06.053: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-7184 /apis/apps/v1/namespaces/deployment-7184/replicasets/test-rolling-update-deployment-67cf4f6444 6d157467-4649-49de-b3df-f40fefc9e3ef 5606550 1 2020-01-31 22:03:58 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 97e1bcd8-83e8-4aaa-813d-392309c1a422 0xc002749d27 0xc002749d28}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002749d98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 31 22:04:06.053: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 31 22:04:06.053: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-7184 /apis/apps/v1/namespaces/deployment-7184/replicasets/test-rolling-update-controller 59db83fa-c5ac-423d-9c17-1dd679a5fc6e 5606560 2 2020-01-31 22:03:48 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 97e1bcd8-83e8-4aaa-813d-392309c1a422 0xc002749c27 0xc002749c28}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002749cb8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 31 22:04:06.056: INFO: Pod "test-rolling-update-deployment-67cf4f6444-zg4kl" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-zg4kl test-rolling-update-deployment-67cf4f6444- deployment-7184 /api/v1/namespaces/deployment-7184/pods/test-rolling-update-deployment-67cf4f6444-zg4kl ced8f4aa-45b2-49ac-8abf-245032e2d52a 5606549 0 2020-01-31 22:03:58 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 6d157467-4649-49de-b3df-f40fefc9e3ef 0xc002aa1fe7 0xc002aa1fe8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tf2jd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tf2jd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tf2jd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:03:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:04:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:04:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-31 22:03:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 22:04:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://109bf19f8d45e16c355e06893e6428735237664525a7edb8c8a46be55198180b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:04:06.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7184" for this suite.

• [SLOW TEST:17.267 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":128,"skipped":2162,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:04:06.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-89dccb84-8a1f-4a38-b561-33faf841e99f
STEP: Creating a pod to test consume secrets
Jan 31 22:04:06.204: INFO: Waiting up to 5m0s for pod "pod-secrets-251e326c-c700-4b15-b75d-cfbe9519e53a" in namespace "secrets-4702" to be "success or failure"
Jan 31 22:04:06.217: INFO: Pod "pod-secrets-251e326c-c700-4b15-b75d-cfbe9519e53a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.600414ms
Jan 31 22:04:08.229: INFO: Pod "pod-secrets-251e326c-c700-4b15-b75d-cfbe9519e53a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024914935s
Jan 31 22:04:10.236: INFO: Pod "pod-secrets-251e326c-c700-4b15-b75d-cfbe9519e53a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031136424s
Jan 31 22:04:12.241: INFO: Pod "pod-secrets-251e326c-c700-4b15-b75d-cfbe9519e53a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03600873s
Jan 31 22:04:14.246: INFO: Pod "pod-secrets-251e326c-c700-4b15-b75d-cfbe9519e53a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041240764s
Jan 31 22:04:16.253: INFO: Pod "pod-secrets-251e326c-c700-4b15-b75d-cfbe9519e53a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048554189s
STEP: Saw pod success
Jan 31 22:04:16.253: INFO: Pod "pod-secrets-251e326c-c700-4b15-b75d-cfbe9519e53a" satisfied condition "success or failure"
Jan 31 22:04:16.258: INFO: Trying to get logs from node jerma-node pod pod-secrets-251e326c-c700-4b15-b75d-cfbe9519e53a container secret-volume-test: 
STEP: delete the pod
Jan 31 22:04:16.359: INFO: Waiting for pod pod-secrets-251e326c-c700-4b15-b75d-cfbe9519e53a to disappear
Jan 31 22:04:16.387: INFO: Pod pod-secrets-251e326c-c700-4b15-b75d-cfbe9519e53a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:04:16.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4702" for this suite.

• [SLOW TEST:10.339 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2188,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:04:16.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:04:16.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2807" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2211,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:04:16.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 31 22:04:16.780: INFO: Waiting up to 5m0s for pod "pod-c35a93a4-4035-485d-b27e-f6337fc7f442" in namespace "emptydir-730" to be "success or failure"
Jan 31 22:04:16.854: INFO: Pod "pod-c35a93a4-4035-485d-b27e-f6337fc7f442": Phase="Pending", Reason="", readiness=false. Elapsed: 74.034916ms
Jan 31 22:04:18.868: INFO: Pod "pod-c35a93a4-4035-485d-b27e-f6337fc7f442": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087117434s
Jan 31 22:04:20.875: INFO: Pod "pod-c35a93a4-4035-485d-b27e-f6337fc7f442": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094881143s
Jan 31 22:04:22.883: INFO: Pod "pod-c35a93a4-4035-485d-b27e-f6337fc7f442": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103065931s
Jan 31 22:04:24.893: INFO: Pod "pod-c35a93a4-4035-485d-b27e-f6337fc7f442": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112846484s
STEP: Saw pod success
Jan 31 22:04:24.894: INFO: Pod "pod-c35a93a4-4035-485d-b27e-f6337fc7f442" satisfied condition "success or failure"
Jan 31 22:04:24.898: INFO: Trying to get logs from node jerma-node pod pod-c35a93a4-4035-485d-b27e-f6337fc7f442 container test-container: 
STEP: delete the pod
Jan 31 22:04:24.938: INFO: Waiting for pod pod-c35a93a4-4035-485d-b27e-f6337fc7f442 to disappear
Jan 31 22:04:24.974: INFO: Pod pod-c35a93a4-4035-485d-b27e-f6337fc7f442 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:04:24.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-730" for this suite.

• [SLOW TEST:8.292 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2232,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:04:24.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 22:04:25.857: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 22:04:27.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105065, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105065, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105065, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105065, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:04:29.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105065, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105065, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105065, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105065, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 22:04:32.945: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that the API server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: creating a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:04:33.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9724" for this suite.
STEP: Destroying namespace "webhook-9724-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.484 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":132,"skipped":2248,"failed":0}
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:04:33.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 31 22:04:33.628: INFO: Waiting up to 5m0s for pod "pod-9bb9fe35-e10a-4ebf-95c0-8f6d35fa89f8" in namespace "emptydir-5163" to be "success or failure"
Jan 31 22:04:33.822: INFO: Pod "pod-9bb9fe35-e10a-4ebf-95c0-8f6d35fa89f8": Phase="Pending", Reason="", readiness=false. Elapsed: 193.838006ms
Jan 31 22:04:35.830: INFO: Pod "pod-9bb9fe35-e10a-4ebf-95c0-8f6d35fa89f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20208787s
Jan 31 22:04:37.867: INFO: Pod "pod-9bb9fe35-e10a-4ebf-95c0-8f6d35fa89f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239285658s
Jan 31 22:04:39.877: INFO: Pod "pod-9bb9fe35-e10a-4ebf-95c0-8f6d35fa89f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249005769s
Jan 31 22:04:41.885: INFO: Pod "pod-9bb9fe35-e10a-4ebf-95c0-8f6d35fa89f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257201796s
Jan 31 22:04:43.893: INFO: Pod "pod-9bb9fe35-e10a-4ebf-95c0-8f6d35fa89f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.265108815s
STEP: Saw pod success
Jan 31 22:04:43.893: INFO: Pod "pod-9bb9fe35-e10a-4ebf-95c0-8f6d35fa89f8" satisfied condition "success or failure"
Jan 31 22:04:43.896: INFO: Trying to get logs from node jerma-node pod pod-9bb9fe35-e10a-4ebf-95c0-8f6d35fa89f8 container test-container: 
STEP: delete the pod
Jan 31 22:04:44.099: INFO: Waiting for pod pod-9bb9fe35-e10a-4ebf-95c0-8f6d35fa89f8 to disappear
Jan 31 22:04:44.111: INFO: Pod pod-9bb9fe35-e10a-4ebf-95c0-8f6d35fa89f8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:04:44.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5163" for this suite.

• [SLOW TEST:10.653 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2248,"failed":0}
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:04:44.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Jan 31 22:04:45.053: INFO: created pod pod-service-account-defaultsa
Jan 31 22:04:45.053: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 31 22:04:45.064: INFO: created pod pod-service-account-mountsa
Jan 31 22:04:45.065: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 31 22:04:45.099: INFO: created pod pod-service-account-nomountsa
Jan 31 22:04:45.099: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 31 22:04:45.145: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 31 22:04:45.145: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 31 22:04:45.206: INFO: created pod pod-service-account-mountsa-mountspec
Jan 31 22:04:45.206: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 31 22:04:45.226: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 31 22:04:45.226: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 31 22:04:45.240: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 31 22:04:45.240: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 31 22:04:45.287: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 31 22:04:45.287: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 31 22:04:45.408: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 31 22:04:45.408: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:04:45.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7430" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":134,"skipped":2254,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:04:46.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:04:49.299: INFO: Create a RollingUpdate DaemonSet
Jan 31 22:04:49.303: INFO: Check that daemon pods launch on every node of the cluster
Jan 31 22:04:49.513: INFO: Number of nodes with available pods: 0
Jan 31 22:04:49.513: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:04:51.049: INFO: Number of nodes with available pods: 0
Jan 31 22:04:51.049: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:04:51.822: INFO: Number of nodes with available pods: 0
Jan 31 22:04:51.822: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:04:53.074: INFO: Number of nodes with available pods: 0
Jan 31 22:04:53.074: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:04:53.724: INFO: Number of nodes with available pods: 0
Jan 31 22:04:53.725: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:04:54.623: INFO: Number of nodes with available pods: 0
Jan 31 22:04:54.623: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:04:56.871: INFO: Number of nodes with available pods: 0
Jan 31 22:04:56.871: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:05:01.528: INFO: Number of nodes with available pods: 0
Jan 31 22:05:01.528: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:05:02.839: INFO: Number of nodes with available pods: 0
Jan 31 22:05:02.839: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:05:05.376: INFO: Number of nodes with available pods: 0
Jan 31 22:05:05.376: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:05:06.119: INFO: Number of nodes with available pods: 0
Jan 31 22:05:06.119: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:05:06.640: INFO: Number of nodes with available pods: 0
Jan 31 22:05:06.640: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:05:07.665: INFO: Number of nodes with available pods: 0
Jan 31 22:05:07.665: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:05:08.544: INFO: Number of nodes with available pods: 0
Jan 31 22:05:08.544: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:05:09.525: INFO: Number of nodes with available pods: 1
Jan 31 22:05:09.525: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:05:10.581: INFO: Number of nodes with available pods: 2
Jan 31 22:05:10.582: INFO: Number of running nodes: 2, number of available pods: 2
Jan 31 22:05:10.582: INFO: Update the DaemonSet to trigger a rollout
Jan 31 22:05:10.595: INFO: Updating DaemonSet daemon-set
Jan 31 22:05:22.624: INFO: Roll back the DaemonSet before rollout is complete
Jan 31 22:05:22.633: INFO: Updating DaemonSet daemon-set
Jan 31 22:05:22.634: INFO: Make sure DaemonSet rollback is complete
Jan 31 22:05:22.658: INFO: Wrong image for pod: daemon-set-f72m5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 31 22:05:22.658: INFO: Pod daemon-set-f72m5 is not available
Jan 31 22:05:23.703: INFO: Wrong image for pod: daemon-set-f72m5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 31 22:05:23.703: INFO: Pod daemon-set-f72m5 is not available
Jan 31 22:05:24.688: INFO: Wrong image for pod: daemon-set-f72m5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 31 22:05:24.688: INFO: Pod daemon-set-f72m5 is not available
Jan 31 22:05:25.707: INFO: Wrong image for pod: daemon-set-f72m5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 31 22:05:25.707: INFO: Pod daemon-set-f72m5 is not available
Jan 31 22:05:26.689: INFO: Wrong image for pod: daemon-set-f72m5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 31 22:05:26.689: INFO: Pod daemon-set-f72m5 is not available
Jan 31 22:05:27.690: INFO: Wrong image for pod: daemon-set-f72m5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 31 22:05:27.690: INFO: Pod daemon-set-f72m5 is not available
Jan 31 22:05:28.693: INFO: Pod daemon-set-d6ksz is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3685, will wait for the garbage collector to delete the pods
Jan 31 22:05:28.772: INFO: Deleting DaemonSet.extensions daemon-set took: 7.9743ms
Jan 31 22:05:29.172: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.837315ms
Jan 31 22:05:43.179: INFO: Number of nodes with available pods: 0
Jan 31 22:05:43.179: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 22:05:43.184: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3685/daemonsets","resourceVersion":"5607125"},"items":null}

Jan 31 22:05:43.187: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3685/pods","resourceVersion":"5607125"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:05:43.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3685" for this suite.

• [SLOW TEST:56.348 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":135,"skipped":2270,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:05:43.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-0134ade7-16f3-425c-9432-417657639d6e
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-0134ade7-16f3-425c-9432-417657639d6e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:07:00.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3068" for this suite.

• [SLOW TEST:77.337 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2281,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:07:00.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 22:07:00.691: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e973346-3669-4e49-8bcd-16322b5d756f" in namespace "downward-api-9808" to be "success or failure"
Jan 31 22:07:00.712: INFO: Pod "downwardapi-volume-9e973346-3669-4e49-8bcd-16322b5d756f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.180337ms
Jan 31 22:07:02.719: INFO: Pod "downwardapi-volume-9e973346-3669-4e49-8bcd-16322b5d756f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02789997s
Jan 31 22:07:04.727: INFO: Pod "downwardapi-volume-9e973346-3669-4e49-8bcd-16322b5d756f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035312102s
Jan 31 22:07:06.767: INFO: Pod "downwardapi-volume-9e973346-3669-4e49-8bcd-16322b5d756f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075815119s
Jan 31 22:07:08.774: INFO: Pod "downwardapi-volume-9e973346-3669-4e49-8bcd-16322b5d756f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083148543s
STEP: Saw pod success
Jan 31 22:07:08.775: INFO: Pod "downwardapi-volume-9e973346-3669-4e49-8bcd-16322b5d756f" satisfied condition "success or failure"
Jan 31 22:07:08.781: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9e973346-3669-4e49-8bcd-16322b5d756f container client-container: 
STEP: delete the pod
Jan 31 22:07:08.822: INFO: Waiting for pod downwardapi-volume-9e973346-3669-4e49-8bcd-16322b5d756f to disappear
Jan 31 22:07:08.827: INFO: Pod downwardapi-volume-9e973346-3669-4e49-8bcd-16322b5d756f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:07:08.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9808" for this suite.

• [SLOW TEST:8.287 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2349,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:07:08.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-a344a106-5706-4622-84a8-4dc92e885214
STEP: Creating a pod to test consume configMaps
Jan 31 22:07:09.167: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0a828fa9-c4db-4871-968a-8ac6da2ed588" in namespace "projected-9487" to be "success or failure"
Jan 31 22:07:09.180: INFO: Pod "pod-projected-configmaps-0a828fa9-c4db-4871-968a-8ac6da2ed588": Phase="Pending", Reason="", readiness=false. Elapsed: 12.646366ms
Jan 31 22:07:11.190: INFO: Pod "pod-projected-configmaps-0a828fa9-c4db-4871-968a-8ac6da2ed588": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022566088s
Jan 31 22:07:13.198: INFO: Pod "pod-projected-configmaps-0a828fa9-c4db-4871-968a-8ac6da2ed588": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030396782s
Jan 31 22:07:15.213: INFO: Pod "pod-projected-configmaps-0a828fa9-c4db-4871-968a-8ac6da2ed588": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045146841s
Jan 31 22:07:17.218: INFO: Pod "pod-projected-configmaps-0a828fa9-c4db-4871-968a-8ac6da2ed588": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05025277s
STEP: Saw pod success
Jan 31 22:07:17.218: INFO: Pod "pod-projected-configmaps-0a828fa9-c4db-4871-968a-8ac6da2ed588" satisfied condition "success or failure"
Jan 31 22:07:17.220: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-0a828fa9-c4db-4871-968a-8ac6da2ed588 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 22:07:17.441: INFO: Waiting for pod pod-projected-configmaps-0a828fa9-c4db-4871-968a-8ac6da2ed588 to disappear
Jan 31 22:07:17.453: INFO: Pod pod-projected-configmaps-0a828fa9-c4db-4871-968a-8ac6da2ed588 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:07:17.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9487" for this suite.

• [SLOW TEST:8.623 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2382,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:07:17.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:07:17.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 31 22:07:20.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1755 create -f -'
Jan 31 22:07:22.960: INFO: stderr: ""
Jan 31 22:07:22.960: INFO: stdout: "e2e-test-crd-publish-openapi-4349-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 31 22:07:22.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1755 delete e2e-test-crd-publish-openapi-4349-crds test-cr'
Jan 31 22:07:23.093: INFO: stderr: ""
Jan 31 22:07:23.093: INFO: stdout: "e2e-test-crd-publish-openapi-4349-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jan 31 22:07:23.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1755 apply -f -'
Jan 31 22:07:23.378: INFO: stderr: ""
Jan 31 22:07:23.378: INFO: stdout: "e2e-test-crd-publish-openapi-4349-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 31 22:07:23.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1755 delete e2e-test-crd-publish-openapi-4349-crds test-cr'
Jan 31 22:07:23.577: INFO: stderr: ""
Jan 31 22:07:23.577: INFO: stdout: "e2e-test-crd-publish-openapi-4349-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jan 31 22:07:23.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4349-crds'
Jan 31 22:07:23.890: INFO: stderr: ""
Jan 31 22:07:23.890: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4349-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:07:27.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1755" for this suite.

• [SLOW TEST:9.925 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":139,"skipped":2398,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:07:27.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:07:58.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-631" for this suite.

• [SLOW TEST:31.367 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":140,"skipped":2410,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:07:58.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jan 31 22:07:58.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8505'
Jan 31 22:07:59.282: INFO: stderr: ""
Jan 31 22:07:59.282: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 22:07:59.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8505'
Jan 31 22:07:59.420: INFO: stderr: ""
Jan 31 22:07:59.420: INFO: stdout: "update-demo-nautilus-gtkhl update-demo-nautilus-rw9l8 "
Jan 31 22:07:59.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gtkhl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8505'
Jan 31 22:07:59.580: INFO: stderr: ""
Jan 31 22:07:59.581: INFO: stdout: ""
Jan 31 22:07:59.581: INFO: update-demo-nautilus-gtkhl is created but not running
Jan 31 22:08:04.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8505'
Jan 31 22:08:05.426: INFO: stderr: ""
Jan 31 22:08:05.426: INFO: stdout: "update-demo-nautilus-gtkhl update-demo-nautilus-rw9l8 "
Jan 31 22:08:05.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gtkhl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8505'
Jan 31 22:08:06.672: INFO: stderr: ""
Jan 31 22:08:06.672: INFO: stdout: ""
Jan 31 22:08:06.672: INFO: update-demo-nautilus-gtkhl is created but not running
Jan 31 22:08:11.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8505'
Jan 31 22:08:11.841: INFO: stderr: ""
Jan 31 22:08:11.842: INFO: stdout: "update-demo-nautilus-gtkhl update-demo-nautilus-rw9l8 "
Jan 31 22:08:11.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gtkhl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8505'
Jan 31 22:08:12.018: INFO: stderr: ""
Jan 31 22:08:12.019: INFO: stdout: "true"
Jan 31 22:08:12.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gtkhl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8505'
Jan 31 22:08:12.161: INFO: stderr: ""
Jan 31 22:08:12.161: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 22:08:12.161: INFO: validating pod update-demo-nautilus-gtkhl
Jan 31 22:08:12.173: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 22:08:12.173: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 22:08:12.173: INFO: update-demo-nautilus-gtkhl is verified up and running
Jan 31 22:08:12.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rw9l8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8505'
Jan 31 22:08:12.297: INFO: stderr: ""
Jan 31 22:08:12.297: INFO: stdout: "true"
Jan 31 22:08:12.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rw9l8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8505'
Jan 31 22:08:12.372: INFO: stderr: ""
Jan 31 22:08:12.372: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 22:08:12.373: INFO: validating pod update-demo-nautilus-rw9l8
Jan 31 22:08:12.380: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 22:08:12.380: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 22:08:12.380: INFO: update-demo-nautilus-rw9l8 is verified up and running
STEP: using delete to clean up resources
Jan 31 22:08:12.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8505'
Jan 31 22:08:12.516: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 22:08:12.516: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 31 22:08:12.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8505'
Jan 31 22:08:12.684: INFO: stderr: "No resources found in kubectl-8505 namespace.\n"
Jan 31 22:08:12.684: INFO: stdout: ""
Jan 31 22:08:12.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8505 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 22:08:12.882: INFO: stderr: ""
Jan 31 22:08:12.882: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:08:12.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8505" for this suite.

• [SLOW TEST:14.155 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":141,"skipped":2431,"failed":0}
SS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:08:12.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-8ec44270-f28d-4803-87e5-b54f2cf660ea
STEP: Creating a pod to test consume secrets
Jan 31 22:08:13.568: INFO: Waiting up to 5m0s for pod "pod-secrets-171f572c-c810-4950-8beb-22e863e5cd48" in namespace "secrets-1469" to be "success or failure"
Jan 31 22:08:13.951: INFO: Pod "pod-secrets-171f572c-c810-4950-8beb-22e863e5cd48": Phase="Pending", Reason="", readiness=false. Elapsed: 383.547566ms
Jan 31 22:08:16.231: INFO: Pod "pod-secrets-171f572c-c810-4950-8beb-22e863e5cd48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.663737419s
Jan 31 22:08:18.236: INFO: Pod "pod-secrets-171f572c-c810-4950-8beb-22e863e5cd48": Phase="Pending", Reason="", readiness=false. Elapsed: 4.667965682s
Jan 31 22:08:20.243: INFO: Pod "pod-secrets-171f572c-c810-4950-8beb-22e863e5cd48": Phase="Pending", Reason="", readiness=false. Elapsed: 6.675223145s
Jan 31 22:08:22.249: INFO: Pod "pod-secrets-171f572c-c810-4950-8beb-22e863e5cd48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.681657708s
STEP: Saw pod success
Jan 31 22:08:22.250: INFO: Pod "pod-secrets-171f572c-c810-4950-8beb-22e863e5cd48" satisfied condition "success or failure"
Jan 31 22:08:22.253: INFO: Trying to get logs from node jerma-node pod pod-secrets-171f572c-c810-4950-8beb-22e863e5cd48 container secret-env-test: 
STEP: delete the pod
Jan 31 22:08:22.293: INFO: Waiting for pod pod-secrets-171f572c-c810-4950-8beb-22e863e5cd48 to disappear
Jan 31 22:08:22.316: INFO: Pod pod-secrets-171f572c-c810-4950-8beb-22e863e5cd48 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:08:22.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1469" for this suite.

• [SLOW TEST:9.410 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2433,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:08:22.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:08:22.587: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 31 22:08:25.755: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:08:25.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9675" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":143,"skipped":2436,"failed":0}
SS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:08:26.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Jan 31 22:08:38.795: INFO: Pod pod-hostip-1e21dc9b-6e16-45c3-96de-f980e48be8ce has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:08:38.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7809" for this suite.

• [SLOW TEST:12.312 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2438,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:08:38.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 22:08:38.933: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43c076f1-2397-440c-8ada-ab6f8900971a" in namespace "downward-api-7619" to be "success or failure"
Jan 31 22:08:38.945: INFO: Pod "downwardapi-volume-43c076f1-2397-440c-8ada-ab6f8900971a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.826701ms
Jan 31 22:08:40.950: INFO: Pod "downwardapi-volume-43c076f1-2397-440c-8ada-ab6f8900971a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016217009s
Jan 31 22:08:42.954: INFO: Pod "downwardapi-volume-43c076f1-2397-440c-8ada-ab6f8900971a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020554594s
Jan 31 22:08:44.962: INFO: Pod "downwardapi-volume-43c076f1-2397-440c-8ada-ab6f8900971a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028182298s
Jan 31 22:08:46.968: INFO: Pod "downwardapi-volume-43c076f1-2397-440c-8ada-ab6f8900971a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034903732s
Jan 31 22:08:48.979: INFO: Pod "downwardapi-volume-43c076f1-2397-440c-8ada-ab6f8900971a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.045471584s
STEP: Saw pod success
Jan 31 22:08:48.979: INFO: Pod "downwardapi-volume-43c076f1-2397-440c-8ada-ab6f8900971a" satisfied condition "success or failure"
Jan 31 22:08:48.985: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-43c076f1-2397-440c-8ada-ab6f8900971a container client-container: 
STEP: delete the pod
Jan 31 22:08:49.173: INFO: Waiting for pod downwardapi-volume-43c076f1-2397-440c-8ada-ab6f8900971a to disappear
Jan 31 22:08:49.185: INFO: Pod downwardapi-volume-43c076f1-2397-440c-8ada-ab6f8900971a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:08:49.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7619" for this suite.

• [SLOW TEST:10.374 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2448,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:08:49.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 31 22:08:55.503: INFO: &Pod{ObjectMeta:{send-events-02d3d58a-2435-4715-8c9f-790a7a94d00a  events-7091 /api/v1/namespaces/events-7091/pods/send-events-02d3d58a-2435-4715-8c9f-790a7a94d00a 34099fe7-c50f-4eb1-b874-f0364b11a6d5 5607907 0 2020-01-31 22:08:49 +0000 UTC   map[name:foo time:345984453] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9zwg8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9zwg8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9zwg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:08:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:08:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:08:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:08:49 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-31 22:08:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 22:08:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://84077002b1adc9fd2c092c0d036e3bfbdcbd1210d084469a9d4e939c591606bd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jan 31 22:08:57.511: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 31 22:08:59.518: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:08:59.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7091" for this suite.

• [SLOW TEST:10.345 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":146,"skipped":2481,"failed":0}
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:08:59.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:09:06.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6765" for this suite.
STEP: Destroying namespace "nsdeletetest-7203" for this suite.
Jan 31 22:09:06.556: INFO: Namespace nsdeletetest-7203 was already deleted
STEP: Destroying namespace "nsdeletetest-5129" for this suite.

• [SLOW TEST:7.020 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":147,"skipped":2482,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:09:06.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 22:09:07.496: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 22:09:09.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105347, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105347, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105347, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105347, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:09:11.525: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105347, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105347, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105347, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105347, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:09:13.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105347, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105347, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105347, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105347, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 22:09:16.541: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:09:16.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-594-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:09:17.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9557" for this suite.
STEP: Destroying namespace "webhook-9557-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.426 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":148,"skipped":2484,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:09:17.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:09:18.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Jan 31 22:09:18.323: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-31T22:09:18Z generation:1 name:name1 resourceVersion:5608071 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:51c90ea0-27b3-479f-bbbf-1811a3482fb9] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jan 31 22:09:28.356: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-31T22:09:28Z generation:1 name:name2 resourceVersion:5608110 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:de2b6c27-7094-4ed4-b65f-5e75a9924bd6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jan 31 22:09:38.366: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-31T22:09:18Z generation:2 name:name1 resourceVersion:5608135 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:51c90ea0-27b3-479f-bbbf-1811a3482fb9] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jan 31 22:09:48.378: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-31T22:09:28Z generation:2 name:name2 resourceVersion:5608159 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:de2b6c27-7094-4ed4-b65f-5e75a9924bd6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jan 31 22:09:58.392: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-31T22:09:18Z generation:2 name:name1 resourceVersion:5608183 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:51c90ea0-27b3-479f-bbbf-1811a3482fb9] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jan 31 22:10:08.405: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-31T22:09:28Z generation:2 name:name2 resourceVersion:5608207 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:de2b6c27-7094-4ed4-b65f-5e75a9924bd6] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:10:18.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-2359" for this suite.

• [SLOW TEST:60.941 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":149,"skipped":2485,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:10:18.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 31 22:10:19.015: INFO: Waiting up to 5m0s for pod "pod-7719c9bb-15b2-4541-a594-9df6ff5b227a" in namespace "emptydir-8888" to be "success or failure"
Jan 31 22:10:19.081: INFO: Pod "pod-7719c9bb-15b2-4541-a594-9df6ff5b227a": Phase="Pending", Reason="", readiness=false. Elapsed: 65.693784ms
Jan 31 22:10:21.087: INFO: Pod "pod-7719c9bb-15b2-4541-a594-9df6ff5b227a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071555422s
Jan 31 22:10:23.093: INFO: Pod "pod-7719c9bb-15b2-4541-a594-9df6ff5b227a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078213569s
Jan 31 22:10:25.101: INFO: Pod "pod-7719c9bb-15b2-4541-a594-9df6ff5b227a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086035645s
Jan 31 22:10:27.108: INFO: Pod "pod-7719c9bb-15b2-4541-a594-9df6ff5b227a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092667295s
STEP: Saw pod success
Jan 31 22:10:27.108: INFO: Pod "pod-7719c9bb-15b2-4541-a594-9df6ff5b227a" satisfied condition "success or failure"
Jan 31 22:10:27.111: INFO: Trying to get logs from node jerma-node pod pod-7719c9bb-15b2-4541-a594-9df6ff5b227a container test-container: 
STEP: delete the pod
Jan 31 22:10:27.155: INFO: Waiting for pod pod-7719c9bb-15b2-4541-a594-9df6ff5b227a to disappear
Jan 31 22:10:27.160: INFO: Pod pod-7719c9bb-15b2-4541-a594-9df6ff5b227a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:10:27.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8888" for this suite.

• [SLOW TEST:8.234 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2489,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:10:27.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 31 22:10:27.231: INFO: Waiting up to 5m0s for pod "pod-9b56c9e5-e564-4020-8905-a0867692d3c1" in namespace "emptydir-2915" to be "success or failure"
Jan 31 22:10:27.293: INFO: Pod "pod-9b56c9e5-e564-4020-8905-a0867692d3c1": Phase="Pending", Reason="", readiness=false. Elapsed: 62.000941ms
Jan 31 22:10:29.299: INFO: Pod "pod-9b56c9e5-e564-4020-8905-a0867692d3c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067764915s
Jan 31 22:10:31.306: INFO: Pod "pod-9b56c9e5-e564-4020-8905-a0867692d3c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075025695s
Jan 31 22:10:33.317: INFO: Pod "pod-9b56c9e5-e564-4020-8905-a0867692d3c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08615669s
Jan 31 22:10:35.325: INFO: Pod "pod-9b56c9e5-e564-4020-8905-a0867692d3c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093431074s
STEP: Saw pod success
Jan 31 22:10:35.325: INFO: Pod "pod-9b56c9e5-e564-4020-8905-a0867692d3c1" satisfied condition "success or failure"
Jan 31 22:10:35.329: INFO: Trying to get logs from node jerma-node pod pod-9b56c9e5-e564-4020-8905-a0867692d3c1 container test-container: 
STEP: delete the pod
Jan 31 22:10:35.568: INFO: Waiting for pod pod-9b56c9e5-e564-4020-8905-a0867692d3c1 to disappear
Jan 31 22:10:35.663: INFO: Pod pod-9b56c9e5-e564-4020-8905-a0867692d3c1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:10:35.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2915" for this suite.

• [SLOW TEST:8.514 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2501,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:10:35.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6051
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6051
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-6051
Jan 31 22:10:35.876: INFO: Found 0 stateful pods, waiting for 1
Jan 31 22:10:45.881: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 31 22:10:45.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6051 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 22:10:46.282: INFO: stderr: "I0131 22:10:46.035932    2058 log.go:172] (0xc000903a20) (0xc000ab28c0) Create stream\nI0131 22:10:46.036063    2058 log.go:172] (0xc000903a20) (0xc000ab28c0) Stream added, broadcasting: 1\nI0131 22:10:46.045394    2058 log.go:172] (0xc000903a20) Reply frame received for 1\nI0131 22:10:46.045462    2058 log.go:172] (0xc000903a20) (0xc00057e5a0) Create stream\nI0131 22:10:46.045480    2058 log.go:172] (0xc000903a20) (0xc00057e5a0) Stream added, broadcasting: 3\nI0131 22:10:46.046537    2058 log.go:172] (0xc000903a20) Reply frame received for 3\nI0131 22:10:46.046583    2058 log.go:172] (0xc000903a20) (0xc000225360) Create stream\nI0131 22:10:46.046594    2058 log.go:172] (0xc000903a20) (0xc000225360) Stream added, broadcasting: 5\nI0131 22:10:46.047583    2058 log.go:172] (0xc000903a20) Reply frame received for 5\nI0131 22:10:46.100399    2058 log.go:172] (0xc000903a20) Data frame received for 5\nI0131 22:10:46.100430    2058 log.go:172] (0xc000225360) (5) Data frame handling\nI0131 22:10:46.100444    2058 log.go:172] (0xc000225360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 22:10:46.187947    2058 log.go:172] (0xc000903a20) Data frame received for 3\nI0131 22:10:46.187997    2058 log.go:172] (0xc00057e5a0) (3) Data frame handling\nI0131 22:10:46.188024    2058 log.go:172] (0xc00057e5a0) (3) Data frame sent\nI0131 22:10:46.273493    2058 log.go:172] (0xc000903a20) Data frame received for 1\nI0131 22:10:46.273561    2058 log.go:172] (0xc000ab28c0) (1) Data frame handling\nI0131 22:10:46.273581    2058 log.go:172] (0xc000ab28c0) (1) Data frame sent\nI0131 22:10:46.273808    2058 log.go:172] (0xc000903a20) (0xc000ab28c0) Stream removed, broadcasting: 1\nI0131 22:10:46.273945    2058 log.go:172] (0xc000903a20) (0xc00057e5a0) Stream removed, broadcasting: 3\nI0131 22:10:46.274026    2058 log.go:172] (0xc000903a20) (0xc000225360) Stream removed, broadcasting: 5\nI0131 22:10:46.274063    2058 log.go:172] (0xc000903a20) Go away received\nI0131 22:10:46.274224    2058 log.go:172] (0xc000903a20) (0xc000ab28c0) Stream removed, broadcasting: 1\nI0131 22:10:46.274268    2058 log.go:172] (0xc000903a20) (0xc00057e5a0) Stream removed, broadcasting: 3\nI0131 22:10:46.274280    2058 log.go:172] (0xc000903a20) (0xc000225360) Stream removed, broadcasting: 5\n"
Jan 31 22:10:46.282: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 22:10:46.282: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 22:10:46.287: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 31 22:10:56.295: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 22:10:56.295: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 22:10:56.390: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999463s
Jan 31 22:10:57.396: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.926296577s
Jan 31 22:10:58.403: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.920381779s
Jan 31 22:10:59.411: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.913343648s
Jan 31 22:11:00.420: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.90488892s
Jan 31 22:11:01.427: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.896368232s
Jan 31 22:11:02.434: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.8889669s
Jan 31 22:11:03.441: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.882334305s
Jan 31 22:11:04.451: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.874990047s
Jan 31 22:11:05.460: INFO: Verifying statefulset ss doesn't scale past 1 for another 865.064989ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6051
Jan 31 22:11:06.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6051 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 22:11:06.885: INFO: stderr: "I0131 22:11:06.668016    2078 log.go:172] (0xc000118c60) (0xc0006a7a40) Create stream\nI0131 22:11:06.668143    2078 log.go:172] (0xc000118c60) (0xc0006a7a40) Stream added, broadcasting: 1\nI0131 22:11:06.670591    2078 log.go:172] (0xc000118c60) Reply frame received for 1\nI0131 22:11:06.670630    2078 log.go:172] (0xc000118c60) (0xc0006a8000) Create stream\nI0131 22:11:06.670656    2078 log.go:172] (0xc000118c60) (0xc0006a8000) Stream added, broadcasting: 3\nI0131 22:11:06.671732    2078 log.go:172] (0xc000118c60) Reply frame received for 3\nI0131 22:11:06.671755    2078 log.go:172] (0xc000118c60) (0xc0006a8140) Create stream\nI0131 22:11:06.671762    2078 log.go:172] (0xc000118c60) (0xc0006a8140) Stream added, broadcasting: 5\nI0131 22:11:06.673016    2078 log.go:172] (0xc000118c60) Reply frame received for 5\nI0131 22:11:06.740971    2078 log.go:172] (0xc000118c60) Data frame received for 5\nI0131 22:11:06.741022    2078 log.go:172] (0xc0006a8140) (5) Data frame handling\nI0131 22:11:06.741047    2078 log.go:172] (0xc0006a8140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 22:11:06.742811    2078 log.go:172] (0xc000118c60) Data frame received for 3\nI0131 22:11:06.742830    2078 log.go:172] (0xc0006a8000) (3) Data frame handling\nI0131 22:11:06.742840    2078 log.go:172] (0xc0006a8000) (3) Data frame sent\nI0131 22:11:06.860398    2078 log.go:172] (0xc000118c60) Data frame received for 1\nI0131 22:11:06.861449    2078 log.go:172] (0xc0006a7a40) (1) Data frame handling\nI0131 22:11:06.861668    2078 log.go:172] (0xc0006a7a40) (1) Data frame sent\nI0131 22:11:06.865376    2078 log.go:172] (0xc000118c60) (0xc0006a7a40) Stream removed, broadcasting: 1\nI0131 22:11:06.866239    2078 log.go:172] (0xc000118c60) (0xc0006a8140) Stream removed, broadcasting: 5\nI0131 22:11:06.866696    2078 log.go:172] (0xc000118c60) (0xc0006a8000) Stream removed, broadcasting: 3\nI0131 22:11:06.867423    2078 log.go:172] (0xc000118c60) (0xc0006a7a40) Stream removed, broadcasting: 1\nI0131 22:11:06.867482    2078 log.go:172] (0xc000118c60) (0xc0006a8000) Stream removed, broadcasting: 3\nI0131 22:11:06.867515    2078 log.go:172] (0xc000118c60) (0xc0006a8140) Stream removed, broadcasting: 5\n"
Jan 31 22:11:06.885: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 22:11:06.885: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 22:11:06.903: INFO: Found 2 stateful pods, waiting for 3
Jan 31 22:11:16.937: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 22:11:16.937: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 22:11:16.937: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 22:11:26.915: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 22:11:26.915: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 22:11:26.915: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 31 22:11:26.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6051 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 22:11:27.429: INFO: stderr: "I0131 22:11:27.226353    2095 log.go:172] (0xc0003c20b0) (0xc0006ae0a0) Create stream\nI0131 22:11:27.226741    2095 log.go:172] (0xc0003c20b0) (0xc0006ae0a0) Stream added, broadcasting: 1\nI0131 22:11:27.229577    2095 log.go:172] (0xc0003c20b0) Reply frame received for 1\nI0131 22:11:27.229641    2095 log.go:172] (0xc0003c20b0) (0xc0008b6000) Create stream\nI0131 22:11:27.229660    2095 log.go:172] (0xc0003c20b0) (0xc0008b6000) Stream added, broadcasting: 3\nI0131 22:11:27.231840    2095 log.go:172] (0xc0003c20b0) Reply frame received for 3\nI0131 22:11:27.231871    2095 log.go:172] (0xc0003c20b0) (0xc0008b60a0) Create stream\nI0131 22:11:27.231881    2095 log.go:172] (0xc0003c20b0) (0xc0008b60a0) Stream added, broadcasting: 5\nI0131 22:11:27.233646    2095 log.go:172] (0xc0003c20b0) Reply frame received for 5\nI0131 22:11:27.340237    2095 log.go:172] (0xc0003c20b0) Data frame received for 3\nI0131 22:11:27.340345    2095 log.go:172] (0xc0008b6000) (3) Data frame handling\nI0131 22:11:27.340386    2095 log.go:172] (0xc0008b6000) (3) Data frame sent\nI0131 22:11:27.340421    2095 log.go:172] (0xc0003c20b0) Data frame received for 5\nI0131 22:11:27.340446    2095 log.go:172] (0xc0008b60a0) (5) Data frame handling\nI0131 22:11:27.340483    2095 log.go:172] (0xc0008b60a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 22:11:27.412817    2095 log.go:172] (0xc0003c20b0) Data frame received for 1\nI0131 22:11:27.413090    2095 log.go:172] (0xc0006ae0a0) (1) Data frame handling\nI0131 22:11:27.413152    2095 log.go:172] (0xc0006ae0a0) (1) Data frame sent\nI0131 22:11:27.413335    2095 log.go:172] (0xc0003c20b0) (0xc0008b6000) Stream removed, broadcasting: 3\nI0131 22:11:27.413508    2095 log.go:172] (0xc0003c20b0) (0xc0006ae0a0) Stream removed, broadcasting: 1\nI0131 22:11:27.413708    2095 log.go:172] (0xc0003c20b0) (0xc0008b60a0) Stream removed, broadcasting: 5\nI0131 22:11:27.414716    2095 log.go:172] (0xc0003c20b0) Go away received\nI0131 22:11:27.415229    2095 log.go:172] (0xc0003c20b0) (0xc0006ae0a0) Stream removed, broadcasting: 1\nI0131 22:11:27.415350    2095 log.go:172] (0xc0003c20b0) (0xc0008b6000) Stream removed, broadcasting: 3\nI0131 22:11:27.415443    2095 log.go:172] (0xc0003c20b0) (0xc0008b60a0) Stream removed, broadcasting: 5\n"
Jan 31 22:11:27.429: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 22:11:27.429: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 22:11:27.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6051 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 22:11:27.871: INFO: stderr: "I0131 22:11:27.613968    2117 log.go:172] (0xc0009f3340) (0xc0009de3c0) Create stream\nI0131 22:11:27.614157    2117 log.go:172] (0xc0009f3340) (0xc0009de3c0) Stream added, broadcasting: 1\nI0131 22:11:27.625445    2117 log.go:172] (0xc0009f3340) Reply frame received for 1\nI0131 22:11:27.625528    2117 log.go:172] (0xc0009f3340) (0xc0006d46e0) Create stream\nI0131 22:11:27.625566    2117 log.go:172] (0xc0009f3340) (0xc0006d46e0) Stream added, broadcasting: 3\nI0131 22:11:27.626495    2117 log.go:172] (0xc0009f3340) Reply frame received for 3\nI0131 22:11:27.626529    2117 log.go:172] (0xc0009f3340) (0xc0004854a0) Create stream\nI0131 22:11:27.626563    2117 log.go:172] (0xc0009f3340) (0xc0004854a0) Stream added, broadcasting: 5\nI0131 22:11:27.627450    2117 log.go:172] (0xc0009f3340) Reply frame received for 5\nI0131 22:11:27.694919    2117 log.go:172] (0xc0009f3340) Data frame received for 5\nI0131 22:11:27.694972    2117 log.go:172] (0xc0004854a0) (5) Data frame handling\nI0131 22:11:27.694990    2117 log.go:172] (0xc0004854a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 22:11:27.739694    2117 log.go:172] (0xc0009f3340) Data frame received for 3\nI0131 22:11:27.739754    2117 log.go:172] (0xc0006d46e0) (3) Data frame handling\nI0131 22:11:27.739789    2117 log.go:172] (0xc0006d46e0) (3) Data frame sent\nI0131 22:11:27.858361    2117 log.go:172] (0xc0009f3340) (0xc0004854a0) Stream removed, broadcasting: 5\nI0131 22:11:27.858641    2117 log.go:172] (0xc0009f3340) Data frame received for 1\nI0131 22:11:27.858745    2117 log.go:172] (0xc0009f3340) (0xc0006d46e0) Stream removed, broadcasting: 3\nI0131 22:11:27.858838    2117 log.go:172] (0xc0009de3c0) (1) Data frame handling\nI0131 22:11:27.858864    2117 log.go:172] (0xc0009de3c0) (1) Data frame sent\nI0131 22:11:27.858880    2117 log.go:172] (0xc0009f3340) (0xc0009de3c0) Stream removed, broadcasting: 1\nI0131 22:11:27.858909    2117 log.go:172] (0xc0009f3340) Go away received\nI0131 22:11:27.860201    2117 log.go:172] (0xc0009f3340) (0xc0009de3c0) Stream removed, broadcasting: 1\nI0131 22:11:27.860260    2117 log.go:172] (0xc0009f3340) (0xc0006d46e0) Stream removed, broadcasting: 3\nI0131 22:11:27.860272    2117 log.go:172] (0xc0009f3340) (0xc0004854a0) Stream removed, broadcasting: 5\n"
Jan 31 22:11:27.871: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 22:11:27.871: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 22:11:27.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6051 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 22:11:28.245: INFO: stderr: "I0131 22:11:28.039101    2138 log.go:172] (0xc000b30580) (0xc000659cc0) Create stream\nI0131 22:11:28.039216    2138 log.go:172] (0xc000b30580) (0xc000659cc0) Stream added, broadcasting: 1\nI0131 22:11:28.041631    2138 log.go:172] (0xc000b30580) Reply frame received for 1\nI0131 22:11:28.041670    2138 log.go:172] (0xc000b30580) (0xc000bc60a0) Create stream\nI0131 22:11:28.041685    2138 log.go:172] (0xc000b30580) (0xc000bc60a0) Stream added, broadcasting: 3\nI0131 22:11:28.042766    2138 log.go:172] (0xc000b30580) Reply frame received for 3\nI0131 22:11:28.042791    2138 log.go:172] (0xc000b30580) (0xc000659d60) Create stream\nI0131 22:11:28.042804    2138 log.go:172] (0xc000b30580) (0xc000659d60) Stream added, broadcasting: 5\nI0131 22:11:28.045759    2138 log.go:172] (0xc000b30580) Reply frame received for 5\nI0131 22:11:28.123516    2138 log.go:172] (0xc000b30580) Data frame received for 5\nI0131 22:11:28.123556    2138 log.go:172] (0xc000659d60) (5) Data frame handling\nI0131 22:11:28.123571    2138 log.go:172] (0xc000659d60) (5) Data frame sent\n+ I0131 22:11:28.124918    2138 log.go:172] (0xc000b30580) Data frame received for 5\nI0131 22:11:28.124962    2138 log.go:172] (0xc000659d60) (5) Data frame handling\nI0131 22:11:28.124982    2138 log.go:172] (0xc000659d60) (5) Data frame sent\nmv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 22:11:28.157727    2138 log.go:172] (0xc000b30580) Data frame received for 3\nI0131 22:11:28.157763    2138 log.go:172] (0xc000bc60a0) (3) Data frame handling\nI0131 22:11:28.157782    2138 log.go:172] (0xc000bc60a0) (3) Data frame sent\nI0131 22:11:28.232886    2138 log.go:172] (0xc000b30580) Data frame received for 1\nI0131 22:11:28.232995    2138 log.go:172] (0xc000b30580) (0xc000bc60a0) Stream removed, broadcasting: 3\nI0131 22:11:28.233050    2138 log.go:172] (0xc000659cc0) (1) Data frame handling\nI0131 22:11:28.233076    2138 log.go:172] (0xc000659cc0) (1) Data frame sent\nI0131 22:11:28.233090    2138 log.go:172] (0xc000b30580) (0xc000659cc0) Stream removed, broadcasting: 1\nI0131 22:11:28.237373    2138 log.go:172] (0xc000b30580) (0xc000659d60) Stream removed, broadcasting: 5\nI0131 22:11:28.237770    2138 log.go:172] (0xc000b30580) (0xc000659cc0) Stream removed, broadcasting: 1\nI0131 22:11:28.237860    2138 log.go:172] (0xc000b30580) (0xc000bc60a0) Stream removed, broadcasting: 3\nI0131 22:11:28.237880    2138 log.go:172] (0xc000b30580) (0xc000659d60) Stream removed, broadcasting: 5\n"
Jan 31 22:11:28.245: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 22:11:28.245: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 22:11:28.245: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 22:11:28.252: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 31 22:11:38.269: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 22:11:38.269: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 22:11:38.269: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 22:11:38.303: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999998586s
Jan 31 22:11:39.312: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984908337s
Jan 31 22:11:40.322: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.976047557s
Jan 31 22:11:41.330: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.965641564s
Jan 31 22:11:42.336: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.958303189s
Jan 31 22:11:43.343: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.951696114s
Jan 31 22:11:44.350: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.944588169s
Jan 31 22:11:45.361: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.938116254s
Jan 31 22:11:46.370: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.926531124s
Jan 31 22:11:47.379: INFO: Verifying statefulset ss doesn't scale past 3 for another 917.817497ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6051
Jan 31 22:11:48.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6051 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 22:11:48.868: INFO: stderr: "I0131 22:11:48.662473    2157 log.go:172] (0xc0000f5b80) (0xc000bf4280) Create stream\nI0131 22:11:48.662947    2157 log.go:172] (0xc0000f5b80) (0xc000bf4280) Stream added, broadcasting: 1\nI0131 22:11:48.668013    2157 log.go:172] (0xc0000f5b80) Reply frame received for 1\nI0131 22:11:48.668116    2157 log.go:172] (0xc0000f5b80) (0xc000a961e0) Create stream\nI0131 22:11:48.668130    2157 log.go:172] (0xc0000f5b80) (0xc000a961e0) Stream added, broadcasting: 3\nI0131 22:11:48.669291    2157 log.go:172] (0xc0000f5b80) Reply frame received for 3\nI0131 22:11:48.669319    2157 log.go:172] (0xc0000f5b80) (0xc000bf4320) Create stream\nI0131 22:11:48.669329    2157 log.go:172] (0xc0000f5b80) (0xc000bf4320) Stream added, broadcasting: 5\nI0131 22:11:48.670753    2157 log.go:172] (0xc0000f5b80) Reply frame received for 5\nI0131 22:11:48.772809    2157 log.go:172] (0xc0000f5b80) Data frame received for 3\nI0131 22:11:48.773012    2157 log.go:172] (0xc000a961e0) (3) Data frame handling\nI0131 22:11:48.773052    2157 log.go:172] (0xc000a961e0) (3) Data frame sent\nI0131 22:11:48.773114    2157 log.go:172] (0xc0000f5b80) Data frame received for 5\nI0131 22:11:48.773165    2157 log.go:172] (0xc000bf4320) (5) Data frame handling\nI0131 22:11:48.773208    2157 log.go:172] (0xc000bf4320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 22:11:48.857811    2157 log.go:172] (0xc0000f5b80) (0xc000a961e0) Stream removed, broadcasting: 3\nI0131 22:11:48.858217    2157 log.go:172] (0xc0000f5b80) Data frame received for 1\nI0131 22:11:48.858268    2157 log.go:172] (0xc000bf4280) (1) Data frame handling\nI0131 22:11:48.858301    2157 log.go:172] (0xc000bf4280) (1) Data frame sent\nI0131 22:11:48.858340    2157 log.go:172] (0xc0000f5b80) (0xc000bf4280) Stream removed, broadcasting: 1\nI0131 22:11:48.859116    2157 log.go:172] (0xc0000f5b80) (0xc000bf4320) Stream removed, broadcasting: 5\nI0131 22:11:48.859278    2157 log.go:172] (0xc0000f5b80) Go away received\nI0131 22:11:48.859355    2157 log.go:172] (0xc0000f5b80) (0xc000bf4280) Stream removed, broadcasting: 1\nI0131 22:11:48.859426    2157 log.go:172] (0xc0000f5b80) (0xc000a961e0) Stream removed, broadcasting: 3\nI0131 22:11:48.859491    2157 log.go:172] (0xc0000f5b80) (0xc000bf4320) Stream removed, broadcasting: 5\n"
Jan 31 22:11:48.869: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 22:11:48.869: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 22:11:48.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6051 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 22:11:49.217: INFO: stderr: "I0131 22:11:49.018200    2178 log.go:172] (0xc000a24630) (0xc0003974a0) Create stream\nI0131 22:11:49.018341    2178 log.go:172] (0xc000a24630) (0xc0003974a0) Stream added, broadcasting: 1\nI0131 22:11:49.021134    2178 log.go:172] (0xc000a24630) Reply frame received for 1\nI0131 22:11:49.021171    2178 log.go:172] (0xc000a24630) (0xc0009d6000) Create stream\nI0131 22:11:49.021179    2178 log.go:172] (0xc000a24630) (0xc0009d6000) Stream added, broadcasting: 3\nI0131 22:11:49.022357    2178 log.go:172] (0xc000a24630) Reply frame received for 3\nI0131 22:11:49.022380    2178 log.go:172] (0xc000a24630) (0xc0008de000) Create stream\nI0131 22:11:49.022392    2178 log.go:172] (0xc000a24630) (0xc0008de000) Stream added, broadcasting: 5\nI0131 22:11:49.023261    2178 log.go:172] (0xc000a24630) Reply frame received for 5\nI0131 22:11:49.118243    2178 log.go:172] (0xc000a24630) Data frame received for 3\nI0131 22:11:49.118325    2178 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0131 22:11:49.118360    2178 log.go:172] (0xc0009d6000) (3) Data frame sent\nI0131 22:11:49.118404    2178 log.go:172] (0xc000a24630) Data frame received for 5\nI0131 22:11:49.118422    2178 log.go:172] (0xc0008de000) (5) Data frame handling\nI0131 22:11:49.118440    2178 log.go:172] (0xc0008de000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 22:11:49.204870    2178 log.go:172] (0xc000a24630) Data frame received for 1\nI0131 22:11:49.205160    2178 log.go:172] (0xc0003974a0) (1) Data frame handling\nI0131 22:11:49.205191    2178 log.go:172] (0xc0003974a0) (1) Data frame sent\nI0131 22:11:49.205391    2178 log.go:172] (0xc000a24630) (0xc0003974a0) Stream removed, broadcasting: 1\nI0131 22:11:49.205512    2178 log.go:172] (0xc000a24630) (0xc0009d6000) Stream removed, broadcasting: 3\nI0131 22:11:49.207020    2178 log.go:172] (0xc000a24630) (0xc0008de000) Stream removed, broadcasting: 5\nI0131 22:11:49.207129    2178 log.go:172] (0xc000a24630) (0xc0003974a0) Stream removed, broadcasting: 1\nI0131 22:11:49.207159    2178 log.go:172] (0xc000a24630) (0xc0009d6000) Stream removed, broadcasting: 3\nI0131 22:11:49.207183    2178 log.go:172] (0xc000a24630) (0xc0008de000) Stream removed, broadcasting: 5\n"
Jan 31 22:11:49.217: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 22:11:49.217: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 22:11:49.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6051 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 22:11:49.586: INFO: stderr: "I0131 22:11:49.385688    2198 log.go:172] (0xc000b60dc0) (0xc000aae460) Create stream\nI0131 22:11:49.386098    2198 log.go:172] (0xc000b60dc0) (0xc000aae460) Stream added, broadcasting: 1\nI0131 22:11:49.390913    2198 log.go:172] (0xc000b60dc0) Reply frame received for 1\nI0131 22:11:49.391033    2198 log.go:172] (0xc000b60dc0) (0xc000b760a0) Create stream\nI0131 22:11:49.391047    2198 log.go:172] (0xc000b60dc0) (0xc000b760a0) Stream added, broadcasting: 3\nI0131 22:11:49.393587    2198 log.go:172] (0xc000b60dc0) Reply frame received for 3\nI0131 22:11:49.393719    2198 log.go:172] (0xc000b60dc0) (0xc000aea0a0) Create stream\nI0131 22:11:49.393752    2198 log.go:172] (0xc000b60dc0) (0xc000aea0a0) Stream added, broadcasting: 5\nI0131 22:11:49.395466    2198 log.go:172] (0xc000b60dc0) Reply frame received for 5\nI0131 22:11:49.478534    2198 log.go:172] (0xc000b60dc0) Data frame received for 5\nI0131 22:11:49.478681    2198 log.go:172] (0xc000aea0a0) (5) Data frame handling\nI0131 22:11:49.478709    2198 log.go:172] (0xc000aea0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 22:11:49.478736    2198 log.go:172] (0xc000b60dc0) Data frame received for 3\nI0131 22:11:49.478744    2198 log.go:172] (0xc000b760a0) (3) Data frame handling\nI0131 22:11:49.478755    2198 log.go:172] (0xc000b760a0) (3) Data frame sent\nI0131 22:11:49.578053    2198 log.go:172] (0xc000b60dc0) (0xc000b760a0) Stream removed, broadcasting: 3\nI0131 22:11:49.578236    2198 log.go:172] (0xc000b60dc0) Data frame received for 1\nI0131 22:11:49.578263    2198 log.go:172] (0xc000b60dc0) (0xc000aea0a0) Stream removed, broadcasting: 5\nI0131 22:11:49.578353    2198 log.go:172] (0xc000aae460) (1) Data frame handling\nI0131 22:11:49.578380    2198 log.go:172] (0xc000aae460) (1) Data frame sent\nI0131 22:11:49.578396    2198 log.go:172] (0xc000b60dc0) (0xc000aae460) Stream removed, broadcasting: 1\nI0131 22:11:49.578419    2198 log.go:172] (0xc000b60dc0) Go away received\nI0131 22:11:49.579440    2198 log.go:172] (0xc000b60dc0) (0xc000aae460) Stream removed, broadcasting: 1\nI0131 22:11:49.579460    2198 log.go:172] (0xc000b60dc0) (0xc000b760a0) Stream removed, broadcasting: 3\nI0131 22:11:49.579480    2198 log.go:172] (0xc000b60dc0) (0xc000aea0a0) Stream removed, broadcasting: 5\n"
Jan 31 22:11:49.586: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 22:11:49.586: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 22:11:49.586: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 31 22:12:19.615: INFO: Deleting all statefulset in ns statefulset-6051
Jan 31 22:12:19.623: INFO: Scaling statefulset ss to 0
Jan 31 22:12:19.637: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 22:12:19.640: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:12:19.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6051" for this suite.

• [SLOW TEST:104.014 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":152,"skipped":2509,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:12:19.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-18ef6d68-602d-44d3-aaaf-b65ddddbf4ff
STEP: Creating secret with name secret-projected-all-test-volume-8f46a056-e28b-4e0b-9663-89d35680480c
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 31 22:12:19.827: INFO: Waiting up to 5m0s for pod "projected-volume-b3a234c4-9493-4813-a1c5-cf3a8052857f" in namespace "projected-1362" to be "success or failure"
Jan 31 22:12:19.839: INFO: Pod "projected-volume-b3a234c4-9493-4813-a1c5-cf3a8052857f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.671176ms
Jan 31 22:12:21.851: INFO: Pod "projected-volume-b3a234c4-9493-4813-a1c5-cf3a8052857f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023962666s
Jan 31 22:12:23.865: INFO: Pod "projected-volume-b3a234c4-9493-4813-a1c5-cf3a8052857f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038032237s
Jan 31 22:12:25.871: INFO: Pod "projected-volume-b3a234c4-9493-4813-a1c5-cf3a8052857f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043760679s
STEP: Saw pod success
Jan 31 22:12:25.871: INFO: Pod "projected-volume-b3a234c4-9493-4813-a1c5-cf3a8052857f" satisfied condition "success or failure"
Jan 31 22:12:25.874: INFO: Trying to get logs from node jerma-node pod projected-volume-b3a234c4-9493-4813-a1c5-cf3a8052857f container projected-all-volume-test: 
STEP: delete the pod
Jan 31 22:12:25.998: INFO: Waiting for pod projected-volume-b3a234c4-9493-4813-a1c5-cf3a8052857f to disappear
Jan 31 22:12:26.003: INFO: Pod projected-volume-b3a234c4-9493-4813-a1c5-cf3a8052857f no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:12:26.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1362" for this suite.

• [SLOW TEST:6.321 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2522,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:12:26.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6945
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 22:12:26.075: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 31 22:13:00.417: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6945 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:13:00.417: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:13:00.488280       8 log.go:172] (0xc0026d2630) (0xc0015ec6e0) Create stream
I0131 22:13:00.488453       8 log.go:172] (0xc0026d2630) (0xc0015ec6e0) Stream added, broadcasting: 1
I0131 22:13:00.494683       8 log.go:172] (0xc0026d2630) Reply frame received for 1
I0131 22:13:00.494824       8 log.go:172] (0xc0026d2630) (0xc0015ec8c0) Create stream
I0131 22:13:00.494874       8 log.go:172] (0xc0026d2630) (0xc0015ec8c0) Stream added, broadcasting: 3
I0131 22:13:00.496547       8 log.go:172] (0xc0026d2630) Reply frame received for 3
I0131 22:13:00.496579       8 log.go:172] (0xc0026d2630) (0xc0016d7b80) Create stream
I0131 22:13:00.496590       8 log.go:172] (0xc0026d2630) (0xc0016d7b80) Stream added, broadcasting: 5
I0131 22:13:00.497924       8 log.go:172] (0xc0026d2630) Reply frame received for 5
I0131 22:13:01.678598       8 log.go:172] (0xc0026d2630) Data frame received for 3
I0131 22:13:01.678670       8 log.go:172] (0xc0015ec8c0) (3) Data frame handling
I0131 22:13:01.678701       8 log.go:172] (0xc0015ec8c0) (3) Data frame sent
I0131 22:13:01.802736       8 log.go:172] (0xc0026d2630) Data frame received for 1
I0131 22:13:01.802979       8 log.go:172] (0xc0026d2630) (0xc0015ec8c0) Stream removed, broadcasting: 3
I0131 22:13:01.803148       8 log.go:172] (0xc0015ec6e0) (1) Data frame handling
I0131 22:13:01.803191       8 log.go:172] (0xc0015ec6e0) (1) Data frame sent
I0131 22:13:01.803210       8 log.go:172] (0xc0026d2630) (0xc0015ec6e0) Stream removed, broadcasting: 1
I0131 22:13:01.806451       8 log.go:172] (0xc0026d2630) (0xc0016d7b80) Stream removed, broadcasting: 5
I0131 22:13:01.806520       8 log.go:172] (0xc0026d2630) (0xc0015ec6e0) Stream removed, broadcasting: 1
I0131 22:13:01.806532       8 log.go:172] (0xc0026d2630) (0xc0015ec8c0) Stream removed, broadcasting: 3
I0131 22:13:01.806611       8 log.go:172] (0xc0026d2630) (0xc0016d7b80) Stream removed, broadcasting: 5
Jan 31 22:13:01.807: INFO: Found all expected endpoints: [netserver-0]
Jan 31 22:13:01.821: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6945 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:13:01.821: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:13:01.901011       8 log.go:172] (0xc0029fa630) (0xc001118960) Create stream
I0131 22:13:01.901185       8 log.go:172] (0xc0029fa630) (0xc001118960) Stream added, broadcasting: 1
I0131 22:13:01.906664       8 log.go:172] (0xc0029fa630) Reply frame received for 1
I0131 22:13:01.906786       8 log.go:172] (0xc0029fa630) (0xc001d7cc80) Create stream
I0131 22:13:01.906806       8 log.go:172] (0xc0029fa630) (0xc001d7cc80) Stream added, broadcasting: 3
I0131 22:13:01.909620       8 log.go:172] (0xc0029fa630) Reply frame received for 3
I0131 22:13:01.909675       8 log.go:172] (0xc0029fa630) (0xc001118aa0) Create stream
I0131 22:13:01.909708       8 log.go:172] (0xc0029fa630) (0xc001118aa0) Stream added, broadcasting: 5
I0131 22:13:01.913292       8 log.go:172] (0xc0029fa630) Reply frame received for 5
I0131 22:13:03.024393       8 log.go:172] (0xc0029fa630) Data frame received for 3
I0131 22:13:03.024475       8 log.go:172] (0xc001d7cc80) (3) Data frame handling
I0131 22:13:03.024503       8 log.go:172] (0xc001d7cc80) (3) Data frame sent
I0131 22:13:03.108626       8 log.go:172] (0xc0029fa630) (0xc001118aa0) Stream removed, broadcasting: 5
I0131 22:13:03.108764       8 log.go:172] (0xc0029fa630) Data frame received for 1
I0131 22:13:03.108793       8 log.go:172] (0xc0029fa630) (0xc001d7cc80) Stream removed, broadcasting: 3
I0131 22:13:03.108826       8 log.go:172] (0xc001118960) (1) Data frame handling
I0131 22:13:03.108842       8 log.go:172] (0xc001118960) (1) Data frame sent
I0131 22:13:03.108868       8 log.go:172] (0xc0029fa630) (0xc001118960) Stream removed, broadcasting: 1
I0131 22:13:03.108902       8 log.go:172] (0xc0029fa630) Go away received
I0131 22:13:03.109250       8 log.go:172] (0xc0029fa630) (0xc001118960) Stream removed, broadcasting: 1
I0131 22:13:03.109330       8 log.go:172] (0xc0029fa630) (0xc001d7cc80) Stream removed, broadcasting: 3
I0131 22:13:03.109347       8 log.go:172] (0xc0029fa630) (0xc001118aa0) Stream removed, broadcasting: 5
Jan 31 22:13:03.109: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:13:03.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6945" for this suite.

• [SLOW TEST:37.112 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2558,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:13:03.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 22:13:03.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3531'
Jan 31 22:13:03.370: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 22:13:03.370: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
Jan 31 22:13:03.388: INFO: scanned /root for discovery docs: 
Jan 31 22:13:03.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3531'
Jan 31 22:13:28.879: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 31 22:13:28.879: INFO: stdout: "Created e2e-test-httpd-rc-70c8bc59c205638f58c45866d0a35659\nScaling up e2e-test-httpd-rc-70c8bc59c205638f58c45866d0a35659 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-70c8bc59c205638f58c45866d0a35659 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-70c8bc59c205638f58c45866d0a35659 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Jan 31 22:13:28.879: INFO: stdout: "Created e2e-test-httpd-rc-70c8bc59c205638f58c45866d0a35659\nScaling up e2e-test-httpd-rc-70c8bc59c205638f58c45866d0a35659 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-70c8bc59c205638f58c45866d0a35659 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-70c8bc59c205638f58c45866d0a35659 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Jan 31 22:13:28.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-3531'
Jan 31 22:13:29.068: INFO: stderr: ""
Jan 31 22:13:29.068: INFO: stdout: "e2e-test-httpd-rc-70c8bc59c205638f58c45866d0a35659-57nlg "
Jan 31 22:13:29.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-70c8bc59c205638f58c45866d0a35659-57nlg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3531'
Jan 31 22:13:29.187: INFO: stderr: ""
Jan 31 22:13:29.187: INFO: stdout: "true"
Jan 31 22:13:29.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-70c8bc59c205638f58c45866d0a35659-57nlg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3531'
Jan 31 22:13:29.321: INFO: stderr: ""
Jan 31 22:13:29.321: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Jan 31 22:13:29.321: INFO: e2e-test-httpd-rc-70c8bc59c205638f58c45866d0a35659-57nlg is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678
Jan 31 22:13:29.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3531'
Jan 31 22:13:29.468: INFO: stderr: ""
Jan 31 22:13:29.468: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:13:29.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3531" for this suite.

• [SLOW TEST:26.461 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":155,"skipped":2560,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:13:29.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-b0d4ad2d-4435-49e6-b8f0-68e12cfd78f2
STEP: Creating secret with name s-test-opt-upd-90af55ad-ef55-4fea-8f0c-884a45a0bf9f
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-b0d4ad2d-4435-49e6-b8f0-68e12cfd78f2
STEP: Updating secret s-test-opt-upd-90af55ad-ef55-4fea-8f0c-884a45a0bf9f
STEP: Creating secret with name s-test-opt-create-0c1cba30-6d4b-44a0-8819-c5d72b98dc0a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:13:44.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8805" for this suite.

• [SLOW TEST:14.452 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2569,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:13:44.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0131 22:14:25.721471       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 22:14:25.721: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:14:25.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7121" for this suite.

• [SLOW TEST:41.683 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":157,"skipped":2645,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:14:25.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan 31 22:14:39.945: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 31 22:15:06.897: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:15:06.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-251" for this suite.

• [SLOW TEST:41.183 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":158,"skipped":2664,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:15:06.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 31 22:15:15.629: INFO: Successfully updated pod "labelsupdate90e8258e-760b-494c-aee7-b7f8c28ea80d"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:15:18.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2350" for this suite.

• [SLOW TEST:11.752 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2687,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:15:18.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 22:15:19.381: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 22:15:21.400: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:15:23.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:15:25.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:15:27.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105719, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 22:15:30.459: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jan 31 22:15:30.501: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:15:30.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4366" for this suite.
STEP: Destroying namespace "webhook-4366-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.226 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":160,"skipped":2744,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:15:30.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 31 22:15:30.991: INFO: Waiting up to 5m0s for pod "pod-f1091b25-ad23-487c-bdc6-71f79197083b" in namespace "emptydir-1823" to be "success or failure"
Jan 31 22:15:31.005: INFO: Pod "pod-f1091b25-ad23-487c-bdc6-71f79197083b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.946229ms
Jan 31 22:15:33.012: INFO: Pod "pod-f1091b25-ad23-487c-bdc6-71f79197083b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021272677s
Jan 31 22:15:35.018: INFO: Pod "pod-f1091b25-ad23-487c-bdc6-71f79197083b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027059918s
Jan 31 22:15:37.023: INFO: Pod "pod-f1091b25-ad23-487c-bdc6-71f79197083b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032380383s
Jan 31 22:15:39.028: INFO: Pod "pod-f1091b25-ad23-487c-bdc6-71f79197083b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03771929s
Jan 31 22:15:41.035: INFO: Pod "pod-f1091b25-ad23-487c-bdc6-71f79197083b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.044495349s
STEP: Saw pod success
Jan 31 22:15:41.035: INFO: Pod "pod-f1091b25-ad23-487c-bdc6-71f79197083b" satisfied condition "success or failure"
Jan 31 22:15:41.038: INFO: Trying to get logs from node jerma-node pod pod-f1091b25-ad23-487c-bdc6-71f79197083b container test-container: 
STEP: delete the pod
Jan 31 22:15:41.064: INFO: Waiting for pod pod-f1091b25-ad23-487c-bdc6-71f79197083b to disappear
Jan 31 22:15:41.121: INFO: Pod pod-f1091b25-ad23-487c-bdc6-71f79197083b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:15:41.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1823" for this suite.

• [SLOW TEST:10.235 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2751,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:15:41.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:15:41.350: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 31 22:15:46.385: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 31 22:15:48.394: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 31 22:15:50.400: INFO: Creating deployment "test-rollover-deployment"
Jan 31 22:15:50.416: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 31 22:15:52.426: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 31 22:15:52.437: INFO: Ensure that both replica sets have 1 created replica
Jan 31 22:15:52.443: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 31 22:15:52.450: INFO: Updating deployment test-rollover-deployment
Jan 31 22:15:52.450: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 31 22:15:55.011: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 31 22:15:55.034: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 31 22:15:55.046: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 22:15:55.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105753, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:15:57.064: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 22:15:57.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105753, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:15:59.062: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 22:15:59.062: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105753, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:16:01.060: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 22:16:01.060: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105759, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:16:03.063: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 22:16:03.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105759, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:16:05.061: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 22:16:05.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105759, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:16:07.063: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 22:16:07.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105759, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:16:09.063: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 22:16:09.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105759, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105750, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:16:11.064: INFO: 
Jan 31 22:16:11.064: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 31 22:16:11.076: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-7509 /apis/apps/v1/namespaces/deployment-7509/deployments/test-rollover-deployment c26d92a6-bb33-4336-ac0e-d09f3003585b 5609877 2 2020-01-31 22:15:50 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003d3ac98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-31 22:15:50 +0000 UTC,LastTransitionTime:2020-01-31 22:15:50 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-31 22:16:10 +0000 UTC,LastTransitionTime:2020-01-31 22:15:50 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 31 22:16:11.079: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-7509 /apis/apps/v1/namespaces/deployment-7509/replicasets/test-rollover-deployment-574d6dfbff 75620c3f-f784-48e6-a61f-f13f502396ff 5609862 2 2020-01-31 22:15:52 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment c26d92a6-bb33-4336-ac0e-d09f3003585b 0xc003d3b117 0xc003d3b118}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003d3b1a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 31 22:16:11.079: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 31 22:16:11.079: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-7509 /apis/apps/v1/namespaces/deployment-7509/replicasets/test-rollover-controller 7b210697-9f1c-45ba-b995-e311535a4682 5609876 2 2020-01-31 22:15:41 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment c26d92a6-bb33-4336-ac0e-d09f3003585b 0xc003d3b047 0xc003d3b048}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003d3b0a8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 31 22:16:11.079: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-7509 /apis/apps/v1/namespaces/deployment-7509/replicasets/test-rollover-deployment-f6c94f66c 40e1c7fd-8abc-4822-9362-1aed7c646c0f 5609812 2 2020-01-31 22:15:50 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment c26d92a6-bb33-4336-ac0e-d09f3003585b 0xc003d3b210 0xc003d3b211}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003d3b2a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 31 22:16:11.081: INFO: Pod "test-rollover-deployment-574d6dfbff-vpnvc" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-vpnvc test-rollover-deployment-574d6dfbff- deployment-7509 /api/v1/namespaces/deployment-7509/pods/test-rollover-deployment-574d6dfbff-vpnvc 52d7f5fc-fa19-4f68-a0cb-49c13f61c5d2 5609838 0 2020-01-31 22:15:52 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 75620c3f-f784-48e6-a61f-f13f502396ff 0xc003d3b7e7 0xc003d3b7e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n87hk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n87hk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n87hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:15:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:15:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:15:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:15:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-31 22:15:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 22:15:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://edaf2c907e78619ee328b7ddbac9630359024f9031cacf1f76a4b5a59b1f70af,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:16:11.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7509" for this suite.

• [SLOW TEST:29.961 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":162,"skipped":2755,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:16:11.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 22:16:11.270: INFO: Waiting up to 5m0s for pod "downwardapi-volume-784b8e3d-b418-4675-8536-61ca6a9e188d" in namespace "projected-9863" to be "success or failure"
Jan 31 22:16:11.286: INFO: Pod "downwardapi-volume-784b8e3d-b418-4675-8536-61ca6a9e188d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.218512ms
Jan 31 22:16:13.293: INFO: Pod "downwardapi-volume-784b8e3d-b418-4675-8536-61ca6a9e188d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022144122s
Jan 31 22:16:15.302: INFO: Pod "downwardapi-volume-784b8e3d-b418-4675-8536-61ca6a9e188d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031695564s
Jan 31 22:16:17.310: INFO: Pod "downwardapi-volume-784b8e3d-b418-4675-8536-61ca6a9e188d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039032351s
Jan 31 22:16:19.316: INFO: Pod "downwardapi-volume-784b8e3d-b418-4675-8536-61ca6a9e188d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046005184s
Jan 31 22:16:21.324: INFO: Pod "downwardapi-volume-784b8e3d-b418-4675-8536-61ca6a9e188d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053818125s
STEP: Saw pod success
Jan 31 22:16:21.324: INFO: Pod "downwardapi-volume-784b8e3d-b418-4675-8536-61ca6a9e188d" satisfied condition "success or failure"
Jan 31 22:16:21.328: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-784b8e3d-b418-4675-8536-61ca6a9e188d container client-container: 
STEP: delete the pod
Jan 31 22:16:21.434: INFO: Waiting for pod downwardapi-volume-784b8e3d-b418-4675-8536-61ca6a9e188d to disappear
Jan 31 22:16:21.438: INFO: Pod downwardapi-volume-784b8e3d-b418-4675-8536-61ca6a9e188d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:16:21.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9863" for this suite.

• [SLOW TEST:10.362 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2765,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:16:21.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 22:16:22.726: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 22:16:24.749: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105782, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105782, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105782, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105782, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:16:26.755: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105782, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105782, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105782, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105782, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:16:28.763: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105782, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105782, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105782, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105782, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 22:16:31.827: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:16:31.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4859" for this suite.
STEP: Destroying namespace "webhook-4859-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.798 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":164,"skipped":2771,"failed":0}
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:16:32.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:16:42.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9469" for this suite.

• [SLOW TEST:10.152 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2777,"failed":0}
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:16:42.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Jan 31 22:16:42.476: INFO: Waiting up to 5m0s for pod "client-containers-dae91e8e-0a2f-4276-9dd2-36241dd90053" in namespace "containers-4858" to be "success or failure"
Jan 31 22:16:42.588: INFO: Pod "client-containers-dae91e8e-0a2f-4276-9dd2-36241dd90053": Phase="Pending", Reason="", readiness=false. Elapsed: 111.349879ms
Jan 31 22:16:44.604: INFO: Pod "client-containers-dae91e8e-0a2f-4276-9dd2-36241dd90053": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127218069s
Jan 31 22:16:46.611: INFO: Pod "client-containers-dae91e8e-0a2f-4276-9dd2-36241dd90053": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134945141s
Jan 31 22:16:48.619: INFO: Pod "client-containers-dae91e8e-0a2f-4276-9dd2-36241dd90053": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142313127s
Jan 31 22:16:50.625: INFO: Pod "client-containers-dae91e8e-0a2f-4276-9dd2-36241dd90053": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148793994s
Jan 31 22:16:52.638: INFO: Pod "client-containers-dae91e8e-0a2f-4276-9dd2-36241dd90053": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.161437369s
STEP: Saw pod success
Jan 31 22:16:52.638: INFO: Pod "client-containers-dae91e8e-0a2f-4276-9dd2-36241dd90053" satisfied condition "success or failure"
Jan 31 22:16:52.643: INFO: Trying to get logs from node jerma-node pod client-containers-dae91e8e-0a2f-4276-9dd2-36241dd90053 container test-container: 
STEP: delete the pod
Jan 31 22:16:52.706: INFO: Waiting for pod client-containers-dae91e8e-0a2f-4276-9dd2-36241dd90053 to disappear
Jan 31 22:16:52.718: INFO: Pod client-containers-dae91e8e-0a2f-4276-9dd2-36241dd90053 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:16:52.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4858" for this suite.

• [SLOW TEST:10.353 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2777,"failed":0}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:16:52.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4126
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-4126
Jan 31 22:16:52.970: INFO: Found 0 stateful pods, waiting for 1
Jan 31 22:17:02.980: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 31 22:17:03.056: INFO: Deleting all statefulset in ns statefulset-4126
Jan 31 22:17:03.072: INFO: Scaling statefulset ss to 0
Jan 31 22:17:23.158: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 22:17:23.163: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:17:23.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4126" for this suite.

• [SLOW TEST:30.451 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":167,"skipped":2778,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:17:23.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Jan 31 22:17:23.395: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7825" to be "success or failure"
Jan 31 22:17:23.407: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.650083ms
Jan 31 22:17:25.413: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017367134s
Jan 31 22:17:27.422: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026422183s
Jan 31 22:17:29.445: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050249028s
Jan 31 22:17:31.455: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060277398s
Jan 31 22:17:33.465: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069657463s
STEP: Saw pod success
Jan 31 22:17:33.465: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 31 22:17:33.470: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 31 22:17:33.531: INFO: Waiting for pod pod-host-path-test to disappear
Jan 31 22:17:33.539: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:17:33.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7825" for this suite.

• [SLOW TEST:10.372 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2790,"failed":0}
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:17:33.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-bed8268d-0677-4a77-bdf5-e2608add080e
STEP: Creating a pod to test consume secrets
Jan 31 22:17:33.716: INFO: Waiting up to 5m0s for pod "pod-secrets-cade7cc2-eea7-4293-aa00-435aa508f4b7" in namespace "secrets-4929" to be "success or failure"
Jan 31 22:17:33.728: INFO: Pod "pod-secrets-cade7cc2-eea7-4293-aa00-435aa508f4b7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.78321ms
Jan 31 22:17:35.734: INFO: Pod "pod-secrets-cade7cc2-eea7-4293-aa00-435aa508f4b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018032139s
Jan 31 22:17:37.742: INFO: Pod "pod-secrets-cade7cc2-eea7-4293-aa00-435aa508f4b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02545708s
Jan 31 22:17:39.750: INFO: Pod "pod-secrets-cade7cc2-eea7-4293-aa00-435aa508f4b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034131566s
Jan 31 22:17:41.759: INFO: Pod "pod-secrets-cade7cc2-eea7-4293-aa00-435aa508f4b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043180467s
STEP: Saw pod success
Jan 31 22:17:41.760: INFO: Pod "pod-secrets-cade7cc2-eea7-4293-aa00-435aa508f4b7" satisfied condition "success or failure"
Jan 31 22:17:41.765: INFO: Trying to get logs from node jerma-node pod pod-secrets-cade7cc2-eea7-4293-aa00-435aa508f4b7 container secret-volume-test: 
STEP: delete the pod
Jan 31 22:17:41.848: INFO: Waiting for pod pod-secrets-cade7cc2-eea7-4293-aa00-435aa508f4b7 to disappear
Jan 31 22:17:41.858: INFO: Pod pod-secrets-cade7cc2-eea7-4293-aa00-435aa508f4b7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:17:41.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4929" for this suite.

• [SLOW TEST:8.301 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2790,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:17:41.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 31 22:17:48.557: INFO: Successfully updated pod "pod-update-activedeadlineseconds-15ae3606-5d59-42b3-86fe-f0894e8f8e2a"
Jan 31 22:17:48.557: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-15ae3606-5d59-42b3-86fe-f0894e8f8e2a" in namespace "pods-3091" to be "terminated due to deadline exceeded"
Jan 31 22:17:48.585: INFO: Pod "pod-update-activedeadlineseconds-15ae3606-5d59-42b3-86fe-f0894e8f8e2a": Phase="Running", Reason="", readiness=true. Elapsed: 27.982795ms
Jan 31 22:17:50.596: INFO: Pod "pod-update-activedeadlineseconds-15ae3606-5d59-42b3-86fe-f0894e8f8e2a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.038512681s
Jan 31 22:17:50.596: INFO: Pod "pod-update-activedeadlineseconds-15ae3606-5d59-42b3-86fe-f0894e8f8e2a" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:17:50.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3091" for this suite.

• [SLOW TEST:8.729 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2811,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:17:50.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:18:07.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-408" for this suite.

• [SLOW TEST:16.630 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":171,"skipped":2896,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:18:07.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:18:07.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3194" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":172,"skipped":2904,"failed":0}
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:18:07.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Jan 31 22:18:07.880: INFO: Waiting up to 5m0s for pod "client-containers-d174e988-dca3-469b-8f5c-c5c0f6528887" in namespace "containers-3617" to be "success or failure"
Jan 31 22:18:07.890: INFO: Pod "client-containers-d174e988-dca3-469b-8f5c-c5c0f6528887": Phase="Pending", Reason="", readiness=false. Elapsed: 9.89136ms
Jan 31 22:18:09.897: INFO: Pod "client-containers-d174e988-dca3-469b-8f5c-c5c0f6528887": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016992303s
Jan 31 22:18:11.908: INFO: Pod "client-containers-d174e988-dca3-469b-8f5c-c5c0f6528887": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027545033s
Jan 31 22:18:13.919: INFO: Pod "client-containers-d174e988-dca3-469b-8f5c-c5c0f6528887": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038972626s
Jan 31 22:18:15.929: INFO: Pod "client-containers-d174e988-dca3-469b-8f5c-c5c0f6528887": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048497323s
STEP: Saw pod success
Jan 31 22:18:15.929: INFO: Pod "client-containers-d174e988-dca3-469b-8f5c-c5c0f6528887" satisfied condition "success or failure"
Jan 31 22:18:15.932: INFO: Trying to get logs from node jerma-node pod client-containers-d174e988-dca3-469b-8f5c-c5c0f6528887 container test-container: 
STEP: delete the pod
Jan 31 22:18:16.016: INFO: Waiting for pod client-containers-d174e988-dca3-469b-8f5c-c5c0f6528887 to disappear
Jan 31 22:18:16.025: INFO: Pod client-containers-d174e988-dca3-469b-8f5c-c5c0f6528887 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:18:16.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3617" for this suite.

• [SLOW TEST:8.293 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2908,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:18:16.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:18:16.083: INFO: Creating deployment "webserver-deployment"
Jan 31 22:18:16.088: INFO: Waiting for observed generation 1
Jan 31 22:18:18.759: INFO: Waiting for all required pods to come up
Jan 31 22:18:18.857: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 31 22:18:41.037: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 31 22:18:41.042: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 31 22:18:41.053: INFO: Updating deployment webserver-deployment
Jan 31 22:18:41.054: INFO: Waiting for observed generation 2
Jan 31 22:18:43.337: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 31 22:18:44.085: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 31 22:18:44.668: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 31 22:18:45.209: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 31 22:18:45.209: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 31 22:18:45.213: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 31 22:18:45.219: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 31 22:18:45.219: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 31 22:18:45.230: INFO: Updating deployment webserver-deployment
Jan 31 22:18:45.230: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 31 22:18:45.376: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 31 22:18:49.356: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
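The replica counts just verified are the proportional-scaling arithmetic at work. The rollout stalls with the old replicaset at 8 available pods and the stuck webserver:404 replicaset at 5 (10 desired, maxUnavailable 2 lets the old set drop to 8, maxSurge 3 lets the total rise to 13). Scaling the deployment from 10 to 30 raises the surge ceiling to 33, and the controller distributes the extra 20 replicas in proportion to each replicaset's current size: roughly 20*(8/13) to the old set and 20*(5/13) to the new, so 8 grows to 20 and 5 grows to 13, the exact .spec.replicas values checked at 22:18:45 and 22:18:49. The strategy driving this, as echoed in the deployment dump below:

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 2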
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 31 22:18:51.601: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-5286 /apis/apps/v1/namespaces/deployment-5286/deployments/webserver-deployment baedb286-6d5d-4cad-8e3e-5602ab4876cd 5610876 3 2020-01-31 22:18:16 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e46518  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-31 22:18:43 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-31 22:18:45 +0000 UTC,LastTransitionTime:2020-01-31 22:18:45 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jan 31 22:18:53.034: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-5286 /apis/apps/v1/namespaces/deployment-5286/replicasets/webserver-deployment-c7997dcc8 ea4b07c0-50fe-4c64-98b8-d368dd904d87 5610887 3 2020-01-31 22:18:41 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment baedb286-6d5d-4cad-8e3e-5602ab4876cd 0xc003e46a67 0xc003e46a68}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e46ad8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 31 22:18:53.034: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jan 31 22:18:53.034: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-5286 /apis/apps/v1/namespaces/deployment-5286/replicasets/webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 5610854 3 2020-01-31 22:18:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment baedb286-6d5d-4cad-8e3e-5602ab4876cd 0xc003e46997 0xc003e46998}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e46a08  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jan 31 22:18:53.096: INFO: Pod "webserver-deployment-595b5b9587-4ngw6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-4ngw6 webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-4ngw6 a3f88357-2bb9-4a36-b6bd-ba1c380a9f92 5610891 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003e1b4c7 0xc003e1b4c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-31 22:18:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.096: INFO: Pod "webserver-deployment-595b5b9587-5jqrv" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5jqrv webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-5jqrv 014f6921-7406-4959-9ad2-78fd90c0211b 5610714 0 2020-01-31 22:18:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003e1b677 0xc003e1b678}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-31 22:18:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-01-31 22:18:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 22:18:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://d13060926a6c5e80ecdfcee3013d7f9e4162c4325eeb5b1e604dba3bded6e113,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.097: INFO: Pod "webserver-deployment-595b5b9587-9fr85" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9fr85 webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-9fr85 6d579030-5cf9-4458-9761-2aaa2be7d634 5610851 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003e1b800 0xc003e1b801}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.097: INFO: Pod "webserver-deployment-595b5b9587-9srtf" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9srtf webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-9srtf 0f5d454e-ff4e-4f41-bb94-9b13940e8d2f 5610890 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003e1b917 0xc003e1b918}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-31 22:18:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.097: INFO: Pod "webserver-deployment-595b5b9587-djt5c" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-djt5c webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-djt5c 046045b1-8c20-4715-88a9-76776ee85624 5610850 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003e1ba97 0xc003e1ba98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.097: INFO: Pod "webserver-deployment-595b5b9587-fw9qw" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fw9qw webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-fw9qw 28a221db-f40b-4898-b08d-6af4a3eb1215 5610718 0 2020-01-31 22:18:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003e1bba7 0xc003e1bba8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-31 22:18:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-01-31 22:18:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 22:18:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://55d29451662132158d7894d40d0a3e4f325000b9d3dfd6ccd74d0289318c9936,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.098: INFO: Pod "webserver-deployment-595b5b9587-h7tmm" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-h7tmm webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-h7tmm d429bb53-6f93-47ed-8c29-78a6b0b00469 5610695 0 2020-01-31 22:18:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003e1bd20 0xc003e1bd21}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-31 22:18:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-31 22:18:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 22:18:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://164978e6b2a57905ab8b6fb5f969929cf940d0169a16c34723bf7b7764ff16bd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.098: INFO: Pod "webserver-deployment-595b5b9587-hbbvt" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-hbbvt webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-hbbvt ffa09402-a4c5-4b2f-a539-b9142bfefa9a 5610728 0 2020-01-31 22:18:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003e1be90 0xc003e1be91}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:39 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-01-31 22:18:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 22:18:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c383e3b15b0399931af1167e4c2c964508abaec9325085ba383873aae84cb47f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.098: INFO: Pod "webserver-deployment-595b5b9587-jbpd4" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-jbpd4 webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-jbpd4 ea3b58de-ce26-4317-82e7-747d2dedbcb1 5610857 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003e1bff0 0xc003e1bff1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-31 22:18:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
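When the verdict is "is not available", the dump's Ready condition carries Reason:ContainersNotReady and a message naming the unready containers. A sketch of how that "[httpd]" list follows from ContainerStatuses (an illustrative helper, not kubelet source):

package sketch

import corev1 "k8s.io/api/core/v1"

// unreadyContainers rebuilds the "[httpd]" list embedded in the
// ContainersNotReady message: every container whose status is not Ready.
// For jbpd4 above, httpd is still Waiting in ContainerCreating, so it lands here.
func unreadyContainers(status corev1.PodStatus) []string {
	var names []string
	for _, cs := range status.ContainerStatuses {
		if !cs.Ready {
			names = append(names, cs.Name)
		}
	}
	return names
}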
Jan 31 22:18:53.098: INFO: Pod "webserver-deployment-595b5b9587-k5flx" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-k5flx webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-k5flx a54a3ba8-1d5d-4e9d-9a09-a6d8b12801ac 5610704 0 2020-01-31 22:18:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003b8a147 0xc003b8a148}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-31 22:18:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-01-31 22:18:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 22:18:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://5789e22dd1b9db78fb36d134baf4851e526a36909fa74b5027ba1bd10c6c6e62,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
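Every dump carries the same pair of NoExecute tolerations with TolerationSeconds:*300; these are injected by the DefaultTolerationSeconds admission plugin, not declared by the test's pod template. Spelled out in Go:

package sketch

import corev1 "k8s.io/api/core/v1"

// defaultTolerations writes out the two injected tolerations: the pod may
// stay bound to a not-ready or unreachable node for up to 300 seconds
// before the taint manager evicts it.
func defaultTolerations() []corev1.Toleration {
	seconds := int64(300)
	return []corev1.Toleration{
		{Key: "node.kubernetes.io/not-ready", Operator: corev1.TolerationOpExists,
			Effect: corev1.TaintEffectNoExecute, TolerationSeconds: &seconds},
		{Key: "node.kubernetes.io/unreachable", Operator: corev1.TolerationOpExists,
			Effect: corev1.TaintEffectNoExecute, TolerationSeconds: &seconds},
	}
}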
Jan 31 22:18:53.099: INFO: Pod "webserver-deployment-595b5b9587-lqcdq" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-lqcdq webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-lqcdq 42f29388-2f26-4c00-b684-24e020857553 5610883 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003b8a2c0 0xc003b8a2c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-31 22:18:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.099: INFO: Pod "webserver-deployment-595b5b9587-lqrk4" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-lqrk4 webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-lqrk4 8d56b53d-a623-4eca-a9ec-ead8d631bb9e 5610731 0 2020-01-31 22:18:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003b8a417 0xc003b8a418}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:39 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-01-31 22:18:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 22:18:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://fe7495947900fbecb85ad5213470e3c571df48bc62a2337c661b180ab176dd11,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.099: INFO: Pod "webserver-deployment-595b5b9587-mq8rn" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mq8rn webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-mq8rn b0a2786f-c176-41f3-8155-ff93671ba59e 5610734 0 2020-01-31 22:18:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003b8a580 0xc003b8a581}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:39 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-01-31 22:18:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 22:18:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://9eff55bd2c179c6dd41282b34ef8dcb1f7cf4a71d60fee1d2ec2dd334e91c5c6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
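QOSClass:BestEffort in each dump follows from the httpd container's empty Limits and Requests. The rule, simplified (this sketch ignores init containers and the all-resources-equal check that separates Guaranteed from Burstable):

package sketch

import corev1 "k8s.io/api/core/v1"

// isBestEffort: a pod is BestEffort when no container sets any resource
// request or limit, which is the case for every pod in this deployment.
func isBestEffort(pod *corev1.Pod) bool {
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			return false
		}
	}
	return true
}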
Jan 31 22:18:53.099: INFO: Pod "webserver-deployment-595b5b9587-rdc7m" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rdc7m webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-rdc7m eb0b75ae-1178-4fbb-8dc6-5f6b13cd7d59 5610737 0 2020-01-31 22:18:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003b8a6e0 0xc003b8a6e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:39 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-01-31 22:18:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 22:18:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://22b76c378e32884cd71bf4e8f6a747d611f8b843a15348cc350b3a8973476e43,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
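The only volume in any of these pods is the automounted ServiceAccount token for the namespace's "default" account; no test code asked for that mount. DefaultMode is serialized in decimal, so the *420 in every dump is octal 0644:

package sketch

// The token projected at /var/run/secrets/kubernetes.io/serviceaccount
// is created with mode 420 decimal == 0o644, i.e. world-readable inside
// the container.
const defaultTokenMode = 0o644 // the DefaultMode:*420 seen in every dump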
Jan 31 22:18:53.100: INFO: Pod "webserver-deployment-595b5b9587-sxkml" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sxkml webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-sxkml b909afdc-85f1-4136-ae80-fff6355b6286 5610896 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003b8a840 0xc003b8a841}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-31 22:18:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.100: INFO: Pod "webserver-deployment-595b5b9587-twj2z" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-twj2z webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-twj2z 45d98ce9-8f08-4a45-a2dd-e222cbf9e5b6 5610837 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003b8a9c7 0xc003b8a9c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:45 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
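twj2z above shows the earliest observable state: only PodScheduled is True, and HostIP, PodIP, and StartTime are still empty because the kubelet has not reported back yet. A sketch of polling a pod out of that state with client-go; the client wiring is assumed, and the contextual Get signature shown is client-go >= 1.18 (the v1.17 client used by this suite drops the context argument):

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForReady polls every 2s until the pod's Ready condition is True,
// i.e. until a dump like twj2z's would read "is available".
func waitForReady(client kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}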
Jan 31 22:18:53.100: INFO: Pod "webserver-deployment-595b5b9587-wm7pg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wm7pg webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-wm7pg 96b2cfb5-bcbe-4952-8e4b-96de26cc6b74 5610847 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003b8aad7 0xc003b8aad8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.100: INFO: Pod "webserver-deployment-595b5b9587-wqfb9" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wqfb9 webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-wqfb9 fd91ffa2-7350-4975-9cd7-da315614f753 5610882 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003b8abf7 0xc003b8abf8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-31 22:18:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.101: INFO: Pod "webserver-deployment-595b5b9587-xnc4k" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xnc4k webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-xnc4k 990dc16d-1db3-4d64-8c7a-797e635ad7cb 5610852 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003b8ad47 0xc003b8ad48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.101: INFO: Pod "webserver-deployment-595b5b9587-znz4g" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-znz4g webserver-deployment-595b5b9587- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-595b5b9587-znz4g aec4d572-48c8-4996-a01d-d9a8feb4803d 5610848 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f7b5ca22-6457-4e83-b0e1-c419e3dac042 0xc003b8ae67 0xc003b8ae68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.101: INFO: Pod "webserver-deployment-c7997dcc8-6km2f" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6km2f webserver-deployment-c7997dcc8- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-c7997dcc8-6km2f d83d8e76-0ad5-488a-abf9-2b98a9b30a01 5610785 0 2020-01-31 22:18:41 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ea4b07c0-50fe-4c64-98b8-d368dd904d87 0xc003b8af77 0xc003b8af78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-31 22:18:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
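From here on the dumps switch to ReplicaSet webserver-deployment-c7997dcc8, whose template image is webserver:404, a deliberately unpullable tag: none of these pods can ever become available, which holds the rollout incomplete while the test exercises it. A sketch of the kind of update that presumably produced this state (deployment name and client wiring assumed, contextual signatures as above):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// switchToBadImage retargets the Deployment at an unpullable tag; the
// resulting ReplicaSet's pods sit in Pending/ContainerCreating and then
// image-pull backoff, exactly like the c7997dcc8 pods in this log.
func switchToBadImage(client kubernetes.Interface, ns string) error {
	d, err := client.AppsV1().Deployments(ns).Get(context.TODO(), "webserver-deployment", metav1.GetOptions{})
	if err != nil {
		return err
	}
	d.Spec.Template.Spec.Containers[0].Image = "webserver:404"
	_, err = client.AppsV1().Deployments(ns).Update(context.TODO(), d, metav1.UpdateOptions{})
	return err
}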
Jan 31 22:18:53.102: INFO: Pod "webserver-deployment-c7997dcc8-6rl9t" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6rl9t webserver-deployment-c7997dcc8- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-c7997dcc8-6rl9t 91fe2f55-8f04-4e26-ac71-b9a9fb85e526 5610840 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ea4b07c0-50fe-4c64-98b8-d368dd904d87 0xc003b8b0f7 0xc003b8b0f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
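Each dump's ownerReferences entry and pod-template-hash label tie the pod to exactly one ReplicaSet: 595b5b9587 for the old httpd:2.4.38-alpine template, c7997dcc8 for the webserver:404 one. A sketch of selecting one side of the rollout the same way (client wiring assumed as above):

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsOfReplicaSet lists the pods belonging to one ReplicaSet of the
// Deployment via its pod-template-hash label value.
func podsOfReplicaSet(client kubernetes.Interface, ns, hash string) (*corev1.PodList, error) {
	return client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{
		LabelSelector: "pod-template-hash=" + hash,
	})
}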
Jan 31 22:18:53.102: INFO: Pod "webserver-deployment-c7997dcc8-87f5x" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-87f5x webserver-deployment-c7997dcc8- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-c7997dcc8-87f5x a9bebf22-759e-4d9b-8717-2dc43823bfef 5610773 0 2020-01-31 22:18:41 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ea4b07c0-50fe-4c64-98b8-d368dd904d87 0xc003b8b227 0xc003b8b228}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-31 22:18:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.102: INFO: Pod "webserver-deployment-c7997dcc8-97br8" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-97br8 webserver-deployment-c7997dcc8- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-c7997dcc8-97br8 a2a99ab9-e53a-4393-a656-7a6c14782179 5610855 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ea4b07c0-50fe-4c64-98b8-d368dd904d87 0xc003b8b397 0xc003b8b398}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-31 22:18:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.103: INFO: Pod "webserver-deployment-c7997dcc8-9jhhr" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9jhhr webserver-deployment-c7997dcc8- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-c7997dcc8-9jhhr b4d23fea-c44f-4f8b-b7ea-0fd28bd09c5f 5610846 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ea4b07c0-50fe-4c64-98b8-d368dd904d87 0xc003b8b507 0xc003b8b508}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.104: INFO: Pod "webserver-deployment-c7997dcc8-d4rvp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d4rvp webserver-deployment-c7997dcc8- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-c7997dcc8-d4rvp 8dcc63dc-aea5-40e2-ab74-2708f30d8773 5610801 0 2020-01-31 22:18:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ea4b07c0-50fe-4c64-98b8-d368dd904d87 0xc003b8b627 0xc003b8b628}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-31 22:18:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.104: INFO: Pod "webserver-deployment-c7997dcc8-hpcv6" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hpcv6 webserver-deployment-c7997dcc8- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-c7997dcc8-hpcv6 f6d075a3-870b-4828-8b5b-604c237b796f 5610766 0 2020-01-31 22:18:41 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ea4b07c0-50fe-4c64-98b8-d368dd904d87 0xc003b8b7a7 0xc003b8b7a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-31 22:18:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.104: INFO: Pod "webserver-deployment-c7997dcc8-k2jgz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k2jgz webserver-deployment-c7997dcc8- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-c7997dcc8-k2jgz 60113843-8474-4231-9dac-02b5ecce20f0 5610844 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ea4b07c0-50fe-4c64-98b8-d368dd904d87 0xc003b8b927 0xc003b8b928}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.104: INFO: Pod "webserver-deployment-c7997dcc8-ktrg5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ktrg5 webserver-deployment-c7997dcc8- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-c7997dcc8-ktrg5 baff8e1e-8589-43b3-8944-0cf0f63743e2 5610875 0 2020-01-31 22:18:47 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ea4b07c0-50fe-4c64-98b8-d368dd904d87 0xc003b8ba57 0xc003b8ba58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.104: INFO: Pod "webserver-deployment-c7997dcc8-kx8pq" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kx8pq webserver-deployment-c7997dcc8- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-c7997dcc8-kx8pq 3ed8bd8c-d402-48f8-ade9-6716b850ec79 5610843 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ea4b07c0-50fe-4c64-98b8-d368dd904d87 0xc003b8bb77 0xc003b8bb78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.105: INFO: Pod "webserver-deployment-c7997dcc8-lnc59" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lnc59 webserver-deployment-c7997dcc8- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-c7997dcc8-lnc59 42cc6263-b295-44a2-aa48-f2d3dbb33458 5610797 0 2020-01-31 22:18:41 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ea4b07c0-50fe-4c64-98b8-d368dd904d87 0xc003b8bca7 0xc003b8bca8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-31 22:18:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.105: INFO: Pod "webserver-deployment-c7997dcc8-mb28r" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mb28r webserver-deployment-c7997dcc8- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-c7997dcc8-mb28r 26c6a24a-02f1-48e6-ab2a-e9552405af8a 5610845 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ea4b07c0-50fe-4c64-98b8-d368dd904d87 0xc003b8be17 0xc003b8be18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 22:18:53.105: INFO: Pod "webserver-deployment-c7997dcc8-xbdm7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xbdm7 webserver-deployment-c7997dcc8- deployment-5286 /api/v1/namespaces/deployment-5286/pods/webserver-deployment-c7997dcc8-xbdm7 61277816-13d1-4ce9-88eb-36adcd00429f 5610895 0 2020-01-31 22:18:45 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ea4b07c0-50fe-4c64-98b8-d368dd904d87 0xc003b8bf47 0xc003b8bf48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5jmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5jmr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5jmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 22:18:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-31 22:18:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:18:53.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5286" for this suite.

• [SLOW TEST:40.259 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":174,"skipped":2918,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:18:56.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-57cc1b1b-ec74-4419-a61a-5f2136b00341
STEP: Creating a pod to test consume secrets
Jan 31 22:19:03.727: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33" in namespace "projected-4892" to be "success or failure"
Jan 31 22:19:03.830: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 103.296741ms
Jan 31 22:19:06.154: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.426802204s
Jan 31 22:19:10.303: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576118767s
Jan 31 22:19:12.634: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 8.906843873s
Jan 31 22:19:14.647: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 10.919606758s
Jan 31 22:19:16.699: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 12.972293026s
Jan 31 22:19:19.365: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 15.637881046s
Jan 31 22:19:21.711: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 17.984014023s
Jan 31 22:19:23.943: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 20.215641675s
Jan 31 22:19:26.243: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 22.515885957s
Jan 31 22:19:28.418: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 24.691142764s
Jan 31 22:19:30.735: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 27.007610236s
Jan 31 22:19:33.006: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 29.279221528s
Jan 31 22:19:35.510: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 31.782463647s
Jan 31 22:19:37.937: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 34.210392835s
Jan 31 22:19:40.122: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 36.394957997s
Jan 31 22:19:42.130: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Pending", Reason="", readiness=false. Elapsed: 38.403251102s
Jan 31 22:19:44.136: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.40910862s
STEP: Saw pod success
Jan 31 22:19:44.136: INFO: Pod "pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33" satisfied condition "success or failure"
Jan 31 22:19:44.140: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33 container secret-volume-test: 
STEP: delete the pod
Jan 31 22:19:44.185: INFO: Waiting for pod pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33 to disappear
Jan 31 22:19:44.189: INFO: Pod pod-projected-secrets-3054092e-4111-4c6f-8185-eb2c94f2af33 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:19:44.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4892" for this suite.

• [SLOW TEST:47.907 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2941,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:19:44.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 31 22:19:44.371: INFO: PodSpec: initContainers in spec.initContainers
Jan 31 22:20:45.321: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-89185e81-f27e-425e-bcbe-e95708c5c9ab", GenerateName:"", Namespace:"init-container-16", SelfLink:"/api/v1/namespaces/init-container-16/pods/pod-init-89185e81-f27e-425e-bcbe-e95708c5c9ab", UID:"9fcdb76c-5e6f-4125-b671-30b3c7190aeb", ResourceVersion:"5611347", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716105984, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"370935221"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vhg4k", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002163400), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vhg4k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vhg4k", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vhg4k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003e1b6a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004fc87e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003e1b730)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003e1b750)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003e1b758), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003e1b75c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105985, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105985, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105985, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716105984, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc0034fb280), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0009856c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000985730)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://003cb41b6e15f68a8dfbe48a51aaa34f11e315ea626ff95198138df0839760c5", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0034fb2c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0034fb2a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003e1b7df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:20:45.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-16" for this suite.

• [SLOW TEST:61.230 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":176,"skipped":2971,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:20:45.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:20:45.599: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:20:46.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8961" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":177,"skipped":2985,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:20:46.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-bda7eee2-88a0-438a-a20f-247c6819817f in namespace container-probe-7774
Jan 31 22:20:54.356: INFO: Started pod busybox-bda7eee2-88a0-438a-a20f-247c6819817f in namespace container-probe-7774
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 22:20:54.360: INFO: Initial restart count of pod busybox-bda7eee2-88a0-438a-a20f-247c6819817f is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:24:55.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7774" for this suite.

• [SLOW TEST:249.443 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2992,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:24:55.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-6f74623d-a3e0-48e1-ab2f-a7ada387c6dd
STEP: Creating secret with name s-test-opt-upd-95a317b0-9397-432a-83eb-d70a07564f4d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-6f74623d-a3e0-48e1-ab2f-a7ada387c6dd
STEP: Updating secret s-test-opt-upd-95a317b0-9397-432a-83eb-d70a07564f4d
STEP: Creating secret with name s-test-opt-create-df335211-cc77-4444-8936-a48810e2de2f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:26:15.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2724" for this suite.

• [SLOW TEST:79.372 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2994,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:26:15.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:26:24.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4476" for this suite.

• [SLOW TEST:9.248 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":180,"skipped":3019,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:26:24.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-3227
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 22:26:24.408: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 31 22:26:52.709: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-3227 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:26:52.710: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:26:52.758187       8 log.go:172] (0xc0026d2160) (0xc000a737c0) Create stream
I0131 22:26:52.758630       8 log.go:172] (0xc0026d2160) (0xc000a737c0) Stream added, broadcasting: 1
I0131 22:26:52.763645       8 log.go:172] (0xc0026d2160) Reply frame received for 1
I0131 22:26:52.763696       8 log.go:172] (0xc0026d2160) (0xc000a73900) Create stream
I0131 22:26:52.763708       8 log.go:172] (0xc0026d2160) (0xc000a73900) Stream added, broadcasting: 3
I0131 22:26:52.764608       8 log.go:172] (0xc0026d2160) Reply frame received for 3
I0131 22:26:52.764636       8 log.go:172] (0xc0026d2160) (0xc00190ebe0) Create stream
I0131 22:26:52.764648       8 log.go:172] (0xc0026d2160) (0xc00190ebe0) Stream added, broadcasting: 5
I0131 22:26:52.765540       8 log.go:172] (0xc0026d2160) Reply frame received for 5
I0131 22:26:52.867063       8 log.go:172] (0xc0026d2160) Data frame received for 3
I0131 22:26:52.867212       8 log.go:172] (0xc000a73900) (3) Data frame handling
I0131 22:26:52.867233       8 log.go:172] (0xc000a73900) (3) Data frame sent
I0131 22:26:52.956442       8 log.go:172] (0xc0026d2160) (0xc000a73900) Stream removed, broadcasting: 3
I0131 22:26:52.956776       8 log.go:172] (0xc0026d2160) Data frame received for 1
I0131 22:26:52.956791       8 log.go:172] (0xc000a737c0) (1) Data frame handling
I0131 22:26:52.957016       8 log.go:172] (0xc000a737c0) (1) Data frame sent
I0131 22:26:52.957029       8 log.go:172] (0xc0026d2160) (0xc00190ebe0) Stream removed, broadcasting: 5
I0131 22:26:52.957052       8 log.go:172] (0xc0026d2160) (0xc000a737c0) Stream removed, broadcasting: 1
I0131 22:26:52.957077       8 log.go:172] (0xc0026d2160) Go away received
I0131 22:26:52.957667       8 log.go:172] (0xc0026d2160) (0xc000a737c0) Stream removed, broadcasting: 1
I0131 22:26:52.957686       8 log.go:172] (0xc0026d2160) (0xc000a73900) Stream removed, broadcasting: 3
I0131 22:26:52.957697       8 log.go:172] (0xc0026d2160) (0xc00190ebe0) Stream removed, broadcasting: 5
Jan 31 22:26:52.957: INFO: Waiting for responses: map[]
Jan 31 22:26:52.966: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-3227 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:26:52.966: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:26:53.017541       8 log.go:172] (0xc002c72370) (0xc00190f7c0) Create stream
I0131 22:26:53.017685       8 log.go:172] (0xc002c72370) (0xc00190f7c0) Stream added, broadcasting: 1
I0131 22:26:53.020990       8 log.go:172] (0xc002c72370) Reply frame received for 1
I0131 22:26:53.021021       8 log.go:172] (0xc002c72370) (0xc000a73c20) Create stream
I0131 22:26:53.021032       8 log.go:172] (0xc002c72370) (0xc000a73c20) Stream added, broadcasting: 3
I0131 22:26:53.021984       8 log.go:172] (0xc002c72370) Reply frame received for 3
I0131 22:26:53.022014       8 log.go:172] (0xc002c72370) (0xc000436140) Create stream
I0131 22:26:53.022024       8 log.go:172] (0xc002c72370) (0xc000436140) Stream added, broadcasting: 5
I0131 22:26:53.023128       8 log.go:172] (0xc002c72370) Reply frame received for 5
I0131 22:26:53.116334       8 log.go:172] (0xc002c72370) Data frame received for 3
I0131 22:26:53.116387       8 log.go:172] (0xc000a73c20) (3) Data frame handling
I0131 22:26:53.116427       8 log.go:172] (0xc000a73c20) (3) Data frame sent
I0131 22:26:53.185082       8 log.go:172] (0xc002c72370) (0xc000a73c20) Stream removed, broadcasting: 3
I0131 22:26:53.185490       8 log.go:172] (0xc002c72370) (0xc000436140) Stream removed, broadcasting: 5
I0131 22:26:53.185839       8 log.go:172] (0xc002c72370) Data frame received for 1
I0131 22:26:53.186155       8 log.go:172] (0xc00190f7c0) (1) Data frame handling
I0131 22:26:53.186192       8 log.go:172] (0xc00190f7c0) (1) Data frame sent
I0131 22:26:53.186216       8 log.go:172] (0xc002c72370) (0xc00190f7c0) Stream removed, broadcasting: 1
I0131 22:26:53.186267       8 log.go:172] (0xc002c72370) Go away received
I0131 22:26:53.186697       8 log.go:172] (0xc002c72370) (0xc00190f7c0) Stream removed, broadcasting: 1
I0131 22:26:53.186727       8 log.go:172] (0xc002c72370) (0xc000a73c20) Stream removed, broadcasting: 3
I0131 22:26:53.186735       8 log.go:172] (0xc002c72370) (0xc000436140) Stream removed, broadcasting: 5
Jan 31 22:26:53.186: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:26:53.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3227" for this suite.

• [SLOW TEST:28.853 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":3028,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:26:53.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:27:32.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2894" for this suite.
STEP: Destroying namespace "nsdeletetest-2940" for this suite.
Jan 31 22:27:32.605: INFO: Namespace nsdeletetest-2940 was already deleted
STEP: Destroying namespace "nsdeletetest-3272" for this suite.

• [SLOW TEST:39.457 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":182,"skipped":3038,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:27:32.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 31 22:27:40.201: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:27:40.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8736" for this suite.

• [SLOW TEST:7.690 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3093,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:27:40.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:27:40.624: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 31 22:27:40.651: INFO: Number of nodes with available pods: 0
Jan 31 22:27:40.651: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:27:41.666: INFO: Number of nodes with available pods: 0
Jan 31 22:27:41.666: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:27:42.954: INFO: Number of nodes with available pods: 0
Jan 31 22:27:42.954: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:27:43.743: INFO: Number of nodes with available pods: 0
Jan 31 22:27:43.743: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:27:44.673: INFO: Number of nodes with available pods: 0
Jan 31 22:27:44.673: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:27:45.696: INFO: Number of nodes with available pods: 0
Jan 31 22:27:45.696: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:27:47.973: INFO: Number of nodes with available pods: 0
Jan 31 22:27:47.973: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:27:49.044: INFO: Number of nodes with available pods: 1
Jan 31 22:27:49.044: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 22:27:49.661: INFO: Number of nodes with available pods: 1
Jan 31 22:27:49.661: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 22:27:50.674: INFO: Number of nodes with available pods: 1
Jan 31 22:27:50.674: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 22:27:51.667: INFO: Number of nodes with available pods: 2
Jan 31 22:27:51.668: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 31 22:27:51.763: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:51.763: INFO: Wrong image for pod: daemon-set-kmj7l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:52.810: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:52.810: INFO: Wrong image for pod: daemon-set-kmj7l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:53.810: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:53.810: INFO: Wrong image for pod: daemon-set-kmj7l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:54.811: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:54.811: INFO: Wrong image for pod: daemon-set-kmj7l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:55.806: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:55.806: INFO: Wrong image for pod: daemon-set-kmj7l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:55.806: INFO: Pod daemon-set-kmj7l is not available
Jan 31 22:27:56.815: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:56.815: INFO: Wrong image for pod: daemon-set-kmj7l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:56.815: INFO: Pod daemon-set-kmj7l is not available
Jan 31 22:27:57.813: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:57.813: INFO: Wrong image for pod: daemon-set-kmj7l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:57.813: INFO: Pod daemon-set-kmj7l is not available
Jan 31 22:27:58.809: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:58.809: INFO: Wrong image for pod: daemon-set-kmj7l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:58.809: INFO: Pod daemon-set-kmj7l is not available
Jan 31 22:27:59.810: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:59.810: INFO: Wrong image for pod: daemon-set-kmj7l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:27:59.810: INFO: Pod daemon-set-kmj7l is not available
Jan 31 22:28:00.807: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:00.807: INFO: Wrong image for pod: daemon-set-kmj7l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:00.807: INFO: Pod daemon-set-kmj7l is not available
Jan 31 22:28:01.810: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:01.811: INFO: Wrong image for pod: daemon-set-kmj7l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:01.811: INFO: Pod daemon-set-kmj7l is not available
Jan 31 22:28:02.810: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:02.810: INFO: Pod daemon-set-sg22z is not available
Jan 31 22:28:03.813: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:03.813: INFO: Pod daemon-set-sg22z is not available
Jan 31 22:28:04.815: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:04.815: INFO: Pod daemon-set-sg22z is not available
Jan 31 22:28:05.811: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:05.811: INFO: Pod daemon-set-sg22z is not available
Jan 31 22:28:06.811: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:06.811: INFO: Pod daemon-set-sg22z is not available
Jan 31 22:28:07.832: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:07.832: INFO: Pod daemon-set-sg22z is not available
Jan 31 22:28:08.832: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:09.812: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:10.809: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:11.954: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:12.828: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:13.810: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:13.810: INFO: Pod daemon-set-2n9sg is not available
Jan 31 22:28:14.814: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:14.814: INFO: Pod daemon-set-2n9sg is not available
Jan 31 22:28:15.813: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:15.813: INFO: Pod daemon-set-2n9sg is not available
Jan 31 22:28:16.806: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:16.806: INFO: Pod daemon-set-2n9sg is not available
Jan 31 22:28:17.811: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:17.811: INFO: Pod daemon-set-2n9sg is not available
Jan 31 22:28:18.809: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:18.810: INFO: Pod daemon-set-2n9sg is not available
Jan 31 22:28:19.809: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:19.809: INFO: Pod daemon-set-2n9sg is not available
Jan 31 22:28:20.807: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:20.807: INFO: Pod daemon-set-2n9sg is not available
Jan 31 22:28:21.811: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:21.811: INFO: Pod daemon-set-2n9sg is not available
Jan 31 22:28:22.808: INFO: Wrong image for pod: daemon-set-2n9sg. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 22:28:22.808: INFO: Pod daemon-set-2n9sg is not available
Jan 31 22:28:23.812: INFO: Pod daemon-set-gxtzp is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 31 22:28:23.835: INFO: Number of nodes with available pods: 1
Jan 31 22:28:23.835: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 22:28:24.847: INFO: Number of nodes with available pods: 1
Jan 31 22:28:24.848: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 22:28:25.844: INFO: Number of nodes with available pods: 1
Jan 31 22:28:25.844: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 22:28:26.859: INFO: Number of nodes with available pods: 1
Jan 31 22:28:26.859: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 22:28:27.967: INFO: Number of nodes with available pods: 1
Jan 31 22:28:27.967: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 22:28:29.125: INFO: Number of nodes with available pods: 1
Jan 31 22:28:29.125: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 22:28:29.850: INFO: Number of nodes with available pods: 1
Jan 31 22:28:29.850: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 22:28:30.852: INFO: Number of nodes with available pods: 2
Jan 31 22:28:30.852: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-197, will wait for the garbage collector to delete the pods
Jan 31 22:28:30.953: INFO: Deleting DaemonSet.extensions daemon-set took: 8.268487ms
Jan 31 22:28:31.354: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.480786ms
Jan 31 22:28:42.360: INFO: Number of nodes with available pods: 0
Jan 31 22:28:42.360: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 22:28:42.363: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-197/daemonsets","resourceVersion":"5612784"},"items":null}

Jan 31 22:28:42.366: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-197/pods","resourceVersion":"5612784"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:28:42.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-197" for this suite.

• [SLOW TEST:62.122 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":184,"skipped":3101,"failed":0}
SSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:28:42.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:28:50.797: INFO: Waiting up to 5m0s for pod "client-envvars-0b1673f9-d133-426c-9944-7d393a5542a6" in namespace "pods-4085" to be "success or failure"
Jan 31 22:28:50.809: INFO: Pod "client-envvars-0b1673f9-d133-426c-9944-7d393a5542a6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.021807ms
Jan 31 22:28:52.815: INFO: Pod "client-envvars-0b1673f9-d133-426c-9944-7d393a5542a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018120751s
Jan 31 22:28:54.860: INFO: Pod "client-envvars-0b1673f9-d133-426c-9944-7d393a5542a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06313377s
Jan 31 22:28:56.871: INFO: Pod "client-envvars-0b1673f9-d133-426c-9944-7d393a5542a6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073999714s
Jan 31 22:28:58.879: INFO: Pod "client-envvars-0b1673f9-d133-426c-9944-7d393a5542a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081478131s
STEP: Saw pod success
Jan 31 22:28:58.879: INFO: Pod "client-envvars-0b1673f9-d133-426c-9944-7d393a5542a6" satisfied condition "success or failure"
Jan 31 22:28:58.883: INFO: Trying to get logs from node jerma-node pod client-envvars-0b1673f9-d133-426c-9944-7d393a5542a6 container env3cont: 
STEP: delete the pod
Jan 31 22:28:58.947: INFO: Waiting for pod client-envvars-0b1673f9-d133-426c-9944-7d393a5542a6 to disappear
Jan 31 22:28:58.962: INFO: Pod client-envvars-0b1673f9-d133-426c-9944-7d393a5542a6 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:28:58.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4085" for this suite.

• [SLOW TEST:16.501 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3105,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:28:58.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 31 22:28:59.175: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8088 /api/v1/namespaces/watch-8088/configmaps/e2e-watch-test-configmap-a 36df1e84-7b7a-4990-a969-ace1eaa6dcc1 5612885 0 2020-01-31 22:28:59 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 31 22:28:59.176: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8088 /api/v1/namespaces/watch-8088/configmaps/e2e-watch-test-configmap-a 36df1e84-7b7a-4990-a969-ace1eaa6dcc1 5612885 0 2020-01-31 22:28:59 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 31 22:29:09.188: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8088 /api/v1/namespaces/watch-8088/configmaps/e2e-watch-test-configmap-a 36df1e84-7b7a-4990-a969-ace1eaa6dcc1 5612927 0 2020-01-31 22:28:59 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 31 22:29:09.188: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8088 /api/v1/namespaces/watch-8088/configmaps/e2e-watch-test-configmap-a 36df1e84-7b7a-4990-a969-ace1eaa6dcc1 5612927 0 2020-01-31 22:28:59 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 31 22:29:19.207: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8088 /api/v1/namespaces/watch-8088/configmaps/e2e-watch-test-configmap-a 36df1e84-7b7a-4990-a969-ace1eaa6dcc1 5612951 0 2020-01-31 22:28:59 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 31 22:29:19.208: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8088 /api/v1/namespaces/watch-8088/configmaps/e2e-watch-test-configmap-a 36df1e84-7b7a-4990-a969-ace1eaa6dcc1 5612951 0 2020-01-31 22:28:59 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 31 22:29:29.220: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8088 /api/v1/namespaces/watch-8088/configmaps/e2e-watch-test-configmap-a 36df1e84-7b7a-4990-a969-ace1eaa6dcc1 5612973 0 2020-01-31 22:28:59 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 31 22:29:29.220: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8088 /api/v1/namespaces/watch-8088/configmaps/e2e-watch-test-configmap-a 36df1e84-7b7a-4990-a969-ace1eaa6dcc1 5612973 0 2020-01-31 22:28:59 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 31 22:29:39.236: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8088 /api/v1/namespaces/watch-8088/configmaps/e2e-watch-test-configmap-b d843f960-c09b-4d20-a705-67805c38b2cb 5612997 0 2020-01-31 22:29:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 31 22:29:39.237: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8088 /api/v1/namespaces/watch-8088/configmaps/e2e-watch-test-configmap-b d843f960-c09b-4d20-a705-67805c38b2cb 5612997 0 2020-01-31 22:29:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 31 22:29:49.249: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8088 /api/v1/namespaces/watch-8088/configmaps/e2e-watch-test-configmap-b d843f960-c09b-4d20-a705-67805c38b2cb 5613021 0 2020-01-31 22:29:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 31 22:29:49.249: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8088 /api/v1/namespaces/watch-8088/configmaps/e2e-watch-test-configmap-b d843f960-c09b-4d20-a705-67805c38b2cb 5613021 0 2020-01-31 22:29:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:29:59.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8088" for this suite.

• [SLOW TEST:60.295 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":186,"skipped":3118,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:29:59.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:29:59.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5555" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":187,"skipped":3121,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:29:59.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 31 22:29:59.654: INFO: Waiting up to 5m0s for pod "downward-api-8afa5a6a-9e00-45b0-ad06-f038691d7b6b" in namespace "downward-api-3912" to be "success or failure"
Jan 31 22:29:59.684: INFO: Pod "downward-api-8afa5a6a-9e00-45b0-ad06-f038691d7b6b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.163936ms
Jan 31 22:30:01.692: INFO: Pod "downward-api-8afa5a6a-9e00-45b0-ad06-f038691d7b6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037296415s
Jan 31 22:30:03.711: INFO: Pod "downward-api-8afa5a6a-9e00-45b0-ad06-f038691d7b6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056192817s
Jan 31 22:30:05.749: INFO: Pod "downward-api-8afa5a6a-9e00-45b0-ad06-f038691d7b6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094204745s
Jan 31 22:30:07.758: INFO: Pod "downward-api-8afa5a6a-9e00-45b0-ad06-f038691d7b6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.103804265s
STEP: Saw pod success
Jan 31 22:30:07.759: INFO: Pod "downward-api-8afa5a6a-9e00-45b0-ad06-f038691d7b6b" satisfied condition "success or failure"
Jan 31 22:30:07.764: INFO: Trying to get logs from node jerma-node pod downward-api-8afa5a6a-9e00-45b0-ad06-f038691d7b6b container dapi-container: 
STEP: delete the pod
Jan 31 22:30:07.867: INFO: Waiting for pod downward-api-8afa5a6a-9e00-45b0-ad06-f038691d7b6b to disappear
Jan 31 22:30:07.905: INFO: Pod downward-api-8afa5a6a-9e00-45b0-ad06-f038691d7b6b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:30:07.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3912" for this suite.

• [SLOW TEST:8.426 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3140,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:30:07.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 22:30:08.091: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e07cf0e-d912-4eb0-910a-a2cce4af8722" in namespace "downward-api-1116" to be "success or failure"
Jan 31 22:30:08.124: INFO: Pod "downwardapi-volume-8e07cf0e-d912-4eb0-910a-a2cce4af8722": Phase="Pending", Reason="", readiness=false. Elapsed: 32.97021ms
Jan 31 22:30:10.135: INFO: Pod "downwardapi-volume-8e07cf0e-d912-4eb0-910a-a2cce4af8722": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043830432s
Jan 31 22:30:12.144: INFO: Pod "downwardapi-volume-8e07cf0e-d912-4eb0-910a-a2cce4af8722": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05258911s
Jan 31 22:30:14.159: INFO: Pod "downwardapi-volume-8e07cf0e-d912-4eb0-910a-a2cce4af8722": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068120788s
Jan 31 22:30:16.165: INFO: Pod "downwardapi-volume-8e07cf0e-d912-4eb0-910a-a2cce4af8722": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073395372s
STEP: Saw pod success
Jan 31 22:30:16.165: INFO: Pod "downwardapi-volume-8e07cf0e-d912-4eb0-910a-a2cce4af8722" satisfied condition "success or failure"
Jan 31 22:30:16.167: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8e07cf0e-d912-4eb0-910a-a2cce4af8722 container client-container: 
STEP: delete the pod
Jan 31 22:30:16.314: INFO: Waiting for pod downwardapi-volume-8e07cf0e-d912-4eb0-910a-a2cce4af8722 to disappear
Jan 31 22:30:16.331: INFO: Pod downwardapi-volume-8e07cf0e-d912-4eb0-910a-a2cce4af8722 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:30:16.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1116" for this suite.

• [SLOW TEST:8.415 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3151,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:30:16.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:30:21.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8678" for this suite.

• [SLOW TEST:5.176 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":190,"skipped":3166,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:30:21.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-lsrz
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 22:30:21.676: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lsrz" in namespace "subpath-9993" to be "success or failure"
Jan 31 22:30:21.709: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Pending", Reason="", readiness=false. Elapsed: 33.007148ms
Jan 31 22:30:23.721: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045607252s
Jan 31 22:30:25.762: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086734717s
Jan 31 22:30:27.785: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Running", Reason="", readiness=true. Elapsed: 6.109124374s
Jan 31 22:30:29.823: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Running", Reason="", readiness=true. Elapsed: 8.147701307s
Jan 31 22:30:31.831: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Running", Reason="", readiness=true. Elapsed: 10.155562223s
Jan 31 22:30:33.843: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Running", Reason="", readiness=true. Elapsed: 12.167662262s
Jan 31 22:30:35.854: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Running", Reason="", readiness=true. Elapsed: 14.178643118s
Jan 31 22:30:37.862: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Running", Reason="", readiness=true. Elapsed: 16.186667103s
Jan 31 22:30:39.870: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Running", Reason="", readiness=true. Elapsed: 18.193981185s
Jan 31 22:30:41.882: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Running", Reason="", readiness=true. Elapsed: 20.206428981s
Jan 31 22:30:43.901: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Running", Reason="", readiness=true. Elapsed: 22.225502164s
Jan 31 22:30:45.908: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Running", Reason="", readiness=true. Elapsed: 24.232741569s
Jan 31 22:30:47.918: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Running", Reason="", readiness=true. Elapsed: 26.241816188s
Jan 31 22:30:49.924: INFO: Pod "pod-subpath-test-configmap-lsrz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.248106931s
STEP: Saw pod success
Jan 31 22:30:49.924: INFO: Pod "pod-subpath-test-configmap-lsrz" satisfied condition "success or failure"
Jan 31 22:30:49.928: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-lsrz container test-container-subpath-configmap-lsrz: 
STEP: delete the pod
Jan 31 22:30:49.972: INFO: Waiting for pod pod-subpath-test-configmap-lsrz to disappear
Jan 31 22:30:50.026: INFO: Pod pod-subpath-test-configmap-lsrz no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lsrz
Jan 31 22:30:50.026: INFO: Deleting pod "pod-subpath-test-configmap-lsrz" in namespace "subpath-9993"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:30:50.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9993" for this suite.

• [SLOW TEST:28.514 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":191,"skipped":3219,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:30:50.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jan 31 22:30:50.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1522'
Jan 31 22:30:52.446: INFO: stderr: ""
Jan 31 22:30:52.446: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 22:30:52.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1522'
Jan 31 22:30:52.738: INFO: stderr: ""
Jan 31 22:30:52.738: INFO: stdout: "update-demo-nautilus-ml9lx update-demo-nautilus-z6r7s "
Jan 31 22:30:52.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ml9lx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:30:52.858: INFO: stderr: ""
Jan 31 22:30:52.859: INFO: stdout: ""
Jan 31 22:30:52.859: INFO: update-demo-nautilus-ml9lx is created but not running
Jan 31 22:30:57.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1522'
Jan 31 22:30:58.751: INFO: stderr: ""
Jan 31 22:30:58.751: INFO: stdout: "update-demo-nautilus-ml9lx update-demo-nautilus-z6r7s "
Jan 31 22:30:58.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ml9lx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:30:59.159: INFO: stderr: ""
Jan 31 22:30:59.159: INFO: stdout: ""
Jan 31 22:30:59.159: INFO: update-demo-nautilus-ml9lx is created but not running
Jan 31 22:31:04.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1522'
Jan 31 22:31:04.284: INFO: stderr: ""
Jan 31 22:31:04.284: INFO: stdout: "update-demo-nautilus-ml9lx update-demo-nautilus-z6r7s "
Jan 31 22:31:04.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ml9lx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:31:04.388: INFO: stderr: ""
Jan 31 22:31:04.389: INFO: stdout: "true"
Jan 31 22:31:04.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ml9lx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:31:04.490: INFO: stderr: ""
Jan 31 22:31:04.490: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 22:31:04.490: INFO: validating pod update-demo-nautilus-ml9lx
Jan 31 22:31:04.507: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 22:31:04.507: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 22:31:04.507: INFO: update-demo-nautilus-ml9lx is verified up and running
Jan 31 22:31:04.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z6r7s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:31:04.605: INFO: stderr: ""
Jan 31 22:31:04.605: INFO: stdout: "true"
Jan 31 22:31:04.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z6r7s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:31:04.696: INFO: stderr: ""
Jan 31 22:31:04.696: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 22:31:04.696: INFO: validating pod update-demo-nautilus-z6r7s
Jan 31 22:31:04.724: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 22:31:04.724: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 22:31:04.724: INFO: update-demo-nautilus-z6r7s is verified up and running
STEP: scaling down the replication controller
Jan 31 22:31:04.726: INFO: scanned /root for discovery docs: 
Jan 31 22:31:04.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1522'
Jan 31 22:31:05.977: INFO: stderr: ""
Jan 31 22:31:05.977: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 22:31:05.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1522'
Jan 31 22:31:06.141: INFO: stderr: ""
Jan 31 22:31:06.142: INFO: stdout: "update-demo-nautilus-ml9lx update-demo-nautilus-z6r7s "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 31 22:31:11.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1522'
Jan 31 22:31:11.292: INFO: stderr: ""
Jan 31 22:31:11.292: INFO: stdout: "update-demo-nautilus-ml9lx update-demo-nautilus-z6r7s "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 31 22:31:16.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1522'
Jan 31 22:31:16.432: INFO: stderr: ""
Jan 31 22:31:16.432: INFO: stdout: "update-demo-nautilus-ml9lx "
Jan 31 22:31:16.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ml9lx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:31:16.545: INFO: stderr: ""
Jan 31 22:31:16.545: INFO: stdout: "true"
Jan 31 22:31:16.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ml9lx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:31:16.685: INFO: stderr: ""
Jan 31 22:31:16.685: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 22:31:16.685: INFO: validating pod update-demo-nautilus-ml9lx
Jan 31 22:31:16.700: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 22:31:16.700: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 22:31:16.701: INFO: update-demo-nautilus-ml9lx is verified up and running
STEP: scaling up the replication controller
Jan 31 22:31:16.705: INFO: scanned /root for discovery docs: 
Jan 31 22:31:16.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1522'
Jan 31 22:31:17.882: INFO: stderr: ""
Jan 31 22:31:17.882: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 22:31:17.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1522'
Jan 31 22:31:18.053: INFO: stderr: ""
Jan 31 22:31:18.053: INFO: stdout: "update-demo-nautilus-ml9lx update-demo-nautilus-nkbjm "
Jan 31 22:31:18.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ml9lx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:31:18.504: INFO: stderr: ""
Jan 31 22:31:18.504: INFO: stdout: "true"
Jan 31 22:31:18.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ml9lx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:31:19.132: INFO: stderr: ""
Jan 31 22:31:19.132: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 22:31:19.132: INFO: validating pod update-demo-nautilus-ml9lx
Jan 31 22:31:19.154: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 22:31:19.154: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 22:31:19.155: INFO: update-demo-nautilus-ml9lx is verified up and running
Jan 31 22:31:19.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkbjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:31:19.290: INFO: stderr: ""
Jan 31 22:31:19.291: INFO: stdout: ""
Jan 31 22:31:19.291: INFO: update-demo-nautilus-nkbjm is created but not running
Jan 31 22:31:24.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1522'
Jan 31 22:31:24.400: INFO: stderr: ""
Jan 31 22:31:24.400: INFO: stdout: "update-demo-nautilus-ml9lx update-demo-nautilus-nkbjm "
Jan 31 22:31:24.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ml9lx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:31:24.577: INFO: stderr: ""
Jan 31 22:31:24.577: INFO: stdout: "true"
Jan 31 22:31:24.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ml9lx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:31:24.716: INFO: stderr: ""
Jan 31 22:31:24.717: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 22:31:24.717: INFO: validating pod update-demo-nautilus-ml9lx
Jan 31 22:31:24.728: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 22:31:24.728: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 22:31:24.728: INFO: update-demo-nautilus-ml9lx is verified up and running
Jan 31 22:31:24.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkbjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:31:24.863: INFO: stderr: ""
Jan 31 22:31:24.863: INFO: stdout: "true"
Jan 31 22:31:24.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkbjm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1522'
Jan 31 22:31:25.008: INFO: stderr: ""
Jan 31 22:31:25.008: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 22:31:25.008: INFO: validating pod update-demo-nautilus-nkbjm
Jan 31 22:31:25.018: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 22:31:25.018: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 22:31:25.018: INFO: update-demo-nautilus-nkbjm is verified up and running
STEP: using delete to clean up resources
Jan 31 22:31:25.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1522'
Jan 31 22:31:25.107: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 22:31:25.107: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 31 22:31:25.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1522'
Jan 31 22:31:25.253: INFO: stderr: "No resources found in kubectl-1522 namespace.\n"
Jan 31 22:31:25.253: INFO: stdout: ""
Jan 31 22:31:25.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1522 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 22:31:25.352: INFO: stderr: ""
Jan 31 22:31:25.352: INFO: stdout: "update-demo-nautilus-ml9lx\nupdate-demo-nautilus-nkbjm\n"
Jan 31 22:31:25.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1522'
Jan 31 22:31:26.034: INFO: stderr: "No resources found in kubectl-1522 namespace.\n"
Jan 31 22:31:26.034: INFO: stdout: ""
Jan 31 22:31:26.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1522 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 22:31:26.134: INFO: stderr: ""
Jan 31 22:31:26.134: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:31:26.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1522" for this suite.

• [SLOW TEST:36.112 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":192,"skipped":3237,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:31:26.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8794.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8794.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 22:31:38.554: INFO: DNS probes using dns-8794/dns-test-1db631ff-4ce2-45b7-967a-d50253f5ec4f succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:31:38.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8794" for this suite.

• [SLOW TEST:12.566 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":193,"skipped":3255,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:31:38.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
STEP: creating the pod
Jan 31 22:31:38.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-739'
Jan 31 22:31:39.150: INFO: stderr: ""
Jan 31 22:31:39.150: INFO: stdout: "pod/pause created\n"
Jan 31 22:31:39.150: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 31 22:31:39.151: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-739" to be "running and ready"
Jan 31 22:31:39.224: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 73.105878ms
Jan 31 22:31:41.229: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078158291s
Jan 31 22:31:43.290: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139885908s
Jan 31 22:31:45.300: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149273371s
Jan 31 22:31:47.314: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.163678553s
Jan 31 22:31:47.315: INFO: Pod "pause" satisfied condition "running and ready"
Jan 31 22:31:47.315: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 31 22:31:47.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-739'
Jan 31 22:31:47.466: INFO: stderr: ""
Jan 31 22:31:47.466: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 31 22:31:47.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-739'
Jan 31 22:31:47.576: INFO: stderr: ""
Jan 31 22:31:47.576: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 31 22:31:47.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-739'
Jan 31 22:31:47.670: INFO: stderr: ""
Jan 31 22:31:47.670: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 31 22:31:47.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-739'
Jan 31 22:31:47.766: INFO: stderr: ""
Jan 31 22:31:47.766: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369
STEP: using delete to clean up resources
Jan 31 22:31:47.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-739'
Jan 31 22:31:47.907: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 22:31:47.908: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 31 22:31:47.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-739'
Jan 31 22:31:48.144: INFO: stderr: "No resources found in kubectl-739 namespace.\n"
Jan 31 22:31:48.145: INFO: stdout: ""
Jan 31 22:31:48.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-739 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 22:31:48.228: INFO: stderr: ""
Jan 31 22:31:48.228: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:31:48.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-739" for this suite.

• [SLOW TEST:9.520 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":194,"skipped":3267,"failed":0}
SS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:31:48.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8745
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-8745
I0131 22:31:48.729619       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8745, replica count: 2
I0131 22:31:51.781506       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 22:31:54.782112       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 22:31:57.782693       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 22:32:00.783398       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 31 22:32:00.783: INFO: Creating new exec pod
Jan 31 22:32:07.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8745 execpodr7b6p -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 31 22:32:08.279: INFO: stderr: "I0131 22:32:08.053840    3114 log.go:172] (0xc0006be630) (0xc00066db80) Create stream\nI0131 22:32:08.054093    3114 log.go:172] (0xc0006be630) (0xc00066db80) Stream added, broadcasting: 1\nI0131 22:32:08.061190    3114 log.go:172] (0xc0006be630) Reply frame received for 1\nI0131 22:32:08.061229    3114 log.go:172] (0xc0006be630) (0xc000662000) Create stream\nI0131 22:32:08.061242    3114 log.go:172] (0xc0006be630) (0xc000662000) Stream added, broadcasting: 3\nI0131 22:32:08.063491    3114 log.go:172] (0xc0006be630) Reply frame received for 3\nI0131 22:32:08.063582    3114 log.go:172] (0xc0006be630) (0xc000226000) Create stream\nI0131 22:32:08.063596    3114 log.go:172] (0xc0006be630) (0xc000226000) Stream added, broadcasting: 5\nI0131 22:32:08.065070    3114 log.go:172] (0xc0006be630) Reply frame received for 5\nI0131 22:32:08.167368    3114 log.go:172] (0xc0006be630) Data frame received for 5\nI0131 22:32:08.167555    3114 log.go:172] (0xc000226000) (5) Data frame handling\nI0131 22:32:08.167583    3114 log.go:172] (0xc000226000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0131 22:32:08.175886    3114 log.go:172] (0xc0006be630) Data frame received for 5\nI0131 22:32:08.175995    3114 log.go:172] (0xc000226000) (5) Data frame handling\nI0131 22:32:08.176048    3114 log.go:172] (0xc000226000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0131 22:32:08.267560    3114 log.go:172] (0xc0006be630) Data frame received for 1\nI0131 22:32:08.267686    3114 log.go:172] (0xc00066db80) (1) Data frame handling\nI0131 22:32:08.267715    3114 log.go:172] (0xc00066db80) (1) Data frame sent\nI0131 22:32:08.268121    3114 log.go:172] (0xc0006be630) (0xc000662000) Stream removed, broadcasting: 3\nI0131 22:32:08.268198    3114 log.go:172] (0xc0006be630) (0xc00066db80) Stream removed, broadcasting: 1\nI0131 22:32:08.268605    3114 log.go:172] (0xc0006be630) (0xc000226000) Stream removed, broadcasting: 5\nI0131 22:32:08.269685    3114 log.go:172] (0xc0006be630) (0xc00066db80) Stream removed, broadcasting: 1\nI0131 22:32:08.269709    3114 log.go:172] (0xc0006be630) (0xc000662000) Stream removed, broadcasting: 3\nI0131 22:32:08.269729    3114 log.go:172] (0xc0006be630) (0xc000226000) Stream removed, broadcasting: 5\n"
Jan 31 22:32:08.279: INFO: stdout: ""
Jan 31 22:32:08.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8745 execpodr7b6p -- /bin/sh -x -c nc -zv -t -w 2 10.96.113.7 80'
Jan 31 22:32:08.651: INFO: stderr: "I0131 22:32:08.483783    3135 log.go:172] (0xc000b2c0b0) (0xc000652500) Create stream\nI0131 22:32:08.484092    3135 log.go:172] (0xc000b2c0b0) (0xc000652500) Stream added, broadcasting: 1\nI0131 22:32:08.489445    3135 log.go:172] (0xc000b2c0b0) Reply frame received for 1\nI0131 22:32:08.489551    3135 log.go:172] (0xc000b2c0b0) (0xc000725900) Create stream\nI0131 22:32:08.489569    3135 log.go:172] (0xc000b2c0b0) (0xc000725900) Stream added, broadcasting: 3\nI0131 22:32:08.491994    3135 log.go:172] (0xc000b2c0b0) Reply frame received for 3\nI0131 22:32:08.492070    3135 log.go:172] (0xc000b2c0b0) (0xc000b12000) Create stream\nI0131 22:32:08.492088    3135 log.go:172] (0xc000b2c0b0) (0xc000b12000) Stream added, broadcasting: 5\nI0131 22:32:08.493678    3135 log.go:172] (0xc000b2c0b0) Reply frame received for 5\nI0131 22:32:08.563922    3135 log.go:172] (0xc000b2c0b0) Data frame received for 5\nI0131 22:32:08.564069    3135 log.go:172] (0xc000b12000) (5) Data frame handling\nI0131 22:32:08.564110    3135 log.go:172] (0xc000b12000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.113.7 80\nI0131 22:32:08.568858    3135 log.go:172] (0xc000b2c0b0) Data frame received for 5\nI0131 22:32:08.568977    3135 log.go:172] (0xc000b12000) (5) Data frame handling\nI0131 22:32:08.569034    3135 log.go:172] (0xc000b12000) (5) Data frame sent\nConnection to 10.96.113.7 80 port [tcp/http] succeeded!\nI0131 22:32:08.638222    3135 log.go:172] (0xc000b2c0b0) Data frame received for 1\nI0131 22:32:08.638304    3135 log.go:172] (0xc000b2c0b0) (0xc000725900) Stream removed, broadcasting: 3\nI0131 22:32:08.638406    3135 log.go:172] (0xc000652500) (1) Data frame handling\nI0131 22:32:08.638441    3135 log.go:172] (0xc000652500) (1) Data frame sent\nI0131 22:32:08.638454    3135 log.go:172] (0xc000b2c0b0) (0xc000652500) Stream removed, broadcasting: 1\nI0131 22:32:08.638786    3135 log.go:172] (0xc000b2c0b0) (0xc000b12000) Stream removed, broadcasting: 5\nI0131 22:32:08.638847    3135 log.go:172] (0xc000b2c0b0) Go away received\nI0131 22:32:08.639228    3135 log.go:172] (0xc000b2c0b0) (0xc000652500) Stream removed, broadcasting: 1\nI0131 22:32:08.639245    3135 log.go:172] (0xc000b2c0b0) (0xc000725900) Stream removed, broadcasting: 3\nI0131 22:32:08.639252    3135 log.go:172] (0xc000b2c0b0) (0xc000b12000) Stream removed, broadcasting: 5\n"
Jan 31 22:32:08.651: INFO: stdout: ""
Jan 31 22:32:08.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8745 execpodr7b6p -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30578'
Jan 31 22:32:08.981: INFO: stderr: "I0131 22:32:08.812280    3156 log.go:172] (0xc00011d340) (0xc0008c61e0) Create stream\nI0131 22:32:08.812502    3156 log.go:172] (0xc00011d340) (0xc0008c61e0) Stream added, broadcasting: 1\nI0131 22:32:08.816171    3156 log.go:172] (0xc00011d340) Reply frame received for 1\nI0131 22:32:08.816231    3156 log.go:172] (0xc00011d340) (0xc00051f5e0) Create stream\nI0131 22:32:08.816242    3156 log.go:172] (0xc00011d340) (0xc00051f5e0) Stream added, broadcasting: 3\nI0131 22:32:08.817132    3156 log.go:172] (0xc00011d340) Reply frame received for 3\nI0131 22:32:08.817155    3156 log.go:172] (0xc00011d340) (0xc000b2a000) Create stream\nI0131 22:32:08.817164    3156 log.go:172] (0xc00011d340) (0xc000b2a000) Stream added, broadcasting: 5\nI0131 22:32:08.818272    3156 log.go:172] (0xc00011d340) Reply frame received for 5\nI0131 22:32:08.889901    3156 log.go:172] (0xc00011d340) Data frame received for 5\nI0131 22:32:08.889978    3156 log.go:172] (0xc000b2a000) (5) Data frame handling\nI0131 22:32:08.889999    3156 log.go:172] (0xc000b2a000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30578\nI0131 22:32:08.890190    3156 log.go:172] (0xc00011d340) Data frame received for 5\nI0131 22:32:08.890204    3156 log.go:172] (0xc000b2a000) (5) Data frame handling\nI0131 22:32:08.890215    3156 log.go:172] (0xc000b2a000) (5) Data frame sent\nConnection to 10.96.2.250 30578 port [tcp/30578] succeeded!\nI0131 22:32:08.970982    3156 log.go:172] (0xc00011d340) Data frame received for 1\nI0131 22:32:08.971353    3156 log.go:172] (0xc0008c61e0) (1) Data frame handling\nI0131 22:32:08.971387    3156 log.go:172] (0xc0008c61e0) (1) Data frame sent\nI0131 22:32:08.971974    3156 log.go:172] (0xc00011d340) (0xc000b2a000) Stream removed, broadcasting: 5\nI0131 22:32:08.972155    3156 log.go:172] (0xc00011d340) (0xc0008c61e0) Stream removed, broadcasting: 1\nI0131 22:32:08.972254    3156 log.go:172] (0xc00011d340) (0xc00051f5e0) Stream removed, broadcasting: 3\nI0131 22:32:08.972717    3156 log.go:172] (0xc00011d340) Go away received\nI0131 22:32:08.973177    3156 log.go:172] (0xc00011d340) (0xc0008c61e0) Stream removed, broadcasting: 1\nI0131 22:32:08.973190    3156 log.go:172] (0xc00011d340) (0xc00051f5e0) Stream removed, broadcasting: 3\nI0131 22:32:08.973197    3156 log.go:172] (0xc00011d340) (0xc000b2a000) Stream removed, broadcasting: 5\n"
Jan 31 22:32:08.982: INFO: stdout: ""
Jan 31 22:32:08.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8745 execpodr7b6p -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30578'
Jan 31 22:32:09.343: INFO: stderr: "I0131 22:32:09.154046    3177 log.go:172] (0xc000afb550) (0xc000aa8500) Create stream\nI0131 22:32:09.154228    3177 log.go:172] (0xc000afb550) (0xc000aa8500) Stream added, broadcasting: 1\nI0131 22:32:09.157622    3177 log.go:172] (0xc000afb550) Reply frame received for 1\nI0131 22:32:09.157706    3177 log.go:172] (0xc000afb550) (0xc000ad43c0) Create stream\nI0131 22:32:09.157719    3177 log.go:172] (0xc000afb550) (0xc000ad43c0) Stream added, broadcasting: 3\nI0131 22:32:09.159755    3177 log.go:172] (0xc000afb550) Reply frame received for 3\nI0131 22:32:09.159791    3177 log.go:172] (0xc000afb550) (0xc000ab60a0) Create stream\nI0131 22:32:09.159805    3177 log.go:172] (0xc000afb550) (0xc000ab60a0) Stream added, broadcasting: 5\nI0131 22:32:09.160996    3177 log.go:172] (0xc000afb550) Reply frame received for 5\nI0131 22:32:09.250445    3177 log.go:172] (0xc000afb550) Data frame received for 5\nI0131 22:32:09.250511    3177 log.go:172] (0xc000ab60a0) (5) Data frame handling\nI0131 22:32:09.250573    3177 log.go:172] (0xc000ab60a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30578\nI0131 22:32:09.252652    3177 log.go:172] (0xc000afb550) Data frame received for 5\nI0131 22:32:09.252713    3177 log.go:172] (0xc000ab60a0) (5) Data frame handling\nI0131 22:32:09.252734    3177 log.go:172] (0xc000ab60a0) (5) Data frame sent\nConnection to 10.96.1.234 30578 port [tcp/30578] succeeded!\nI0131 22:32:09.329465    3177 log.go:172] (0xc000afb550) (0xc000ab60a0) Stream removed, broadcasting: 5\nI0131 22:32:09.329606    3177 log.go:172] (0xc000afb550) Data frame received for 1\nI0131 22:32:09.329655    3177 log.go:172] (0xc000afb550) (0xc000ad43c0) Stream removed, broadcasting: 3\nI0131 22:32:09.329694    3177 log.go:172] (0xc000aa8500) (1) Data frame handling\nI0131 22:32:09.329729    3177 log.go:172] (0xc000aa8500) (1) Data frame sent\nI0131 22:32:09.329743    3177 log.go:172] (0xc000afb550) (0xc000aa8500) Stream removed, broadcasting: 1\nI0131 22:32:09.329769    3177 log.go:172] (0xc000afb550) Go away received\nI0131 22:32:09.330739    3177 log.go:172] (0xc000afb550) (0xc000aa8500) Stream removed, broadcasting: 1\nI0131 22:32:09.330759    3177 log.go:172] (0xc000afb550) (0xc000ad43c0) Stream removed, broadcasting: 3\nI0131 22:32:09.330768    3177 log.go:172] (0xc000afb550) (0xc000ab60a0) Stream removed, broadcasting: 5\n"
Jan 31 22:32:09.343: INFO: stdout: ""
Jan 31 22:32:09.343: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:32:09.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8745" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:21.204 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":195,"skipped":3269,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:32:09.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 31 22:32:09.491: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 31 22:32:09.510: INFO: Waiting for terminating namespaces to be deleted...
Jan 31 22:32:09.512: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 31 22:32:09.518: INFO: execpodr7b6p from services-8745 started at 2020-01-31 22:32:00 +0000 UTC (1 container status recorded)
Jan 31 22:32:09.519: INFO: 	Container agnhost-pause ready: true, restart count 0
Jan 31 22:32:09.519: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 31 22:32:09.519: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 22:32:09.519: INFO: externalname-service-2sf84 from services-8745 started at 2020-01-31 22:31:49 +0000 UTC (1 container status recorded)
Jan 31 22:32:09.519: INFO: 	Container externalname-service ready: true, restart count 0
Jan 31 22:32:09.519: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 31 22:32:09.519: INFO: 	Container weave ready: true, restart count 1
Jan 31 22:32:09.519: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 22:32:09.519: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 31 22:32:09.539: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 31 22:32:09.539: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 31 22:32:09.539: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 31 22:32:09.539: INFO: 	Container etcd ready: true, restart count 1
Jan 31 22:32:09.539: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 31 22:32:09.539: INFO: 	Container coredns ready: true, restart count 0
Jan 31 22:32:09.539: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 31 22:32:09.539: INFO: 	Container coredns ready: true, restart count 0
Jan 31 22:32:09.539: INFO: externalname-service-fssp5 from services-8745 started at 2020-01-31 22:31:48 +0000 UTC (1 container status recorded)
Jan 31 22:32:09.539: INFO: 	Container externalname-service ready: true, restart count 0
Jan 31 22:32:09.539: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 31 22:32:09.539: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 31 22:32:09.539: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 31 22:32:09.539: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 22:32:09.539: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 31 22:32:09.539: INFO: 	Container weave ready: true, restart count 0
Jan 31 22:32:09.539: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 22:32:09.539: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 31 22:32:09.539: INFO: 	Container kube-scheduler ready: true, restart count 4
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ef196833f489e3], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:32:10.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7108" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":196,"skipped":3273,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:32:10.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-b0767742-0962-41e5-8000-a4de1a396c56
STEP: Creating a pod to test consume configMaps
Jan 31 22:32:10.711: INFO: Waiting up to 5m0s for pod "pod-configmaps-1b188d51-19b9-4409-87ea-70f406c285d9" in namespace "configmap-8207" to be "success or failure"
Jan 31 22:32:10.757: INFO: Pod "pod-configmaps-1b188d51-19b9-4409-87ea-70f406c285d9": Phase="Pending", Reason="", readiness=false. Elapsed: 45.366026ms
Jan 31 22:32:12.764: INFO: Pod "pod-configmaps-1b188d51-19b9-4409-87ea-70f406c285d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052677068s
Jan 31 22:32:14.825: INFO: Pod "pod-configmaps-1b188d51-19b9-4409-87ea-70f406c285d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113894477s
Jan 31 22:32:16.835: INFO: Pod "pod-configmaps-1b188d51-19b9-4409-87ea-70f406c285d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124184849s
Jan 31 22:32:18.968: INFO: Pod "pod-configmaps-1b188d51-19b9-4409-87ea-70f406c285d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257269798s
Jan 31 22:32:21.655: INFO: Pod "pod-configmaps-1b188d51-19b9-4409-87ea-70f406c285d9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.943888028s
Jan 31 22:32:23.662: INFO: Pod "pod-configmaps-1b188d51-19b9-4409-87ea-70f406c285d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.950801043s
STEP: Saw pod success
Jan 31 22:32:23.662: INFO: Pod "pod-configmaps-1b188d51-19b9-4409-87ea-70f406c285d9" satisfied condition "success or failure"
Jan 31 22:32:23.666: INFO: Trying to get logs from node jerma-node pod pod-configmaps-1b188d51-19b9-4409-87ea-70f406c285d9 container configmap-volume-test: 
STEP: delete the pod
Jan 31 22:32:23.711: INFO: Waiting for pod pod-configmaps-1b188d51-19b9-4409-87ea-70f406c285d9 to disappear
Jan 31 22:32:23.719: INFO: Pod pod-configmaps-1b188d51-19b9-4409-87ea-70f406c285d9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:32:23.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8207" for this suite.

• [SLOW TEST:13.142 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3275,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:32:23.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 22:32:23.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7cc1172-d712-4345-ad97-ebc1e057eaca" in namespace "projected-5109" to be "success or failure"
Jan 31 22:32:23.981: INFO: Pod "downwardapi-volume-a7cc1172-d712-4345-ad97-ebc1e057eaca": Phase="Pending", Reason="", readiness=false. Elapsed: 67.735301ms
Jan 31 22:32:25.992: INFO: Pod "downwardapi-volume-a7cc1172-d712-4345-ad97-ebc1e057eaca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0779484s
Jan 31 22:32:27.997: INFO: Pod "downwardapi-volume-a7cc1172-d712-4345-ad97-ebc1e057eaca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083805181s
Jan 31 22:32:30.003: INFO: Pod "downwardapi-volume-a7cc1172-d712-4345-ad97-ebc1e057eaca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089553737s
Jan 31 22:32:32.013: INFO: Pod "downwardapi-volume-a7cc1172-d712-4345-ad97-ebc1e057eaca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099354035s
STEP: Saw pod success
Jan 31 22:32:32.013: INFO: Pod "downwardapi-volume-a7cc1172-d712-4345-ad97-ebc1e057eaca" satisfied condition "success or failure"
Jan 31 22:32:32.017: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a7cc1172-d712-4345-ad97-ebc1e057eaca container client-container: 
STEP: delete the pod
Jan 31 22:32:32.176: INFO: Waiting for pod downwardapi-volume-a7cc1172-d712-4345-ad97-ebc1e057eaca to disappear
Jan 31 22:32:32.184: INFO: Pod downwardapi-volume-a7cc1172-d712-4345-ad97-ebc1e057eaca no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:32:32.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5109" for this suite.

• [SLOW TEST:8.487 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3278,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:32:32.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1050.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1050.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1050.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1050.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1050.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1050.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 22:32:44.445: INFO: DNS probes using dns-1050/dns-test-96e858fb-af6d-43b6-a7c0-beb905424a77 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:32:44.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1050" for this suite.

• [SLOW TEST:12.403 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":199,"skipped":3301,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:32:44.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 22:32:45.222: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 22:32:47.256: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:32:49.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:32:51.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:32:53.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:32:55.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106765, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 22:32:58.319: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:32:59.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7551" for this suite.
STEP: Destroying namespace "webhook-7551-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.612 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":200,"skipped":3321,"failed":0}
S
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:32:59.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 31 22:32:59.297: INFO: Waiting up to 5m0s for pod "downward-api-01026fcc-5581-442e-854b-faefcc1e8b2d" in namespace "downward-api-4545" to be "success or failure"
Jan 31 22:32:59.340: INFO: Pod "downward-api-01026fcc-5581-442e-854b-faefcc1e8b2d": Phase="Pending", Reason="", readiness=false. Elapsed: 42.662181ms
Jan 31 22:33:01.344: INFO: Pod "downward-api-01026fcc-5581-442e-854b-faefcc1e8b2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046478626s
Jan 31 22:33:03.351: INFO: Pod "downward-api-01026fcc-5581-442e-854b-faefcc1e8b2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05337468s
Jan 31 22:33:05.358: INFO: Pod "downward-api-01026fcc-5581-442e-854b-faefcc1e8b2d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060596338s
Jan 31 22:33:07.365: INFO: Pod "downward-api-01026fcc-5581-442e-854b-faefcc1e8b2d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068054385s
Jan 31 22:33:09.372: INFO: Pod "downward-api-01026fcc-5581-442e-854b-faefcc1e8b2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074734257s
STEP: Saw pod success
Jan 31 22:33:09.372: INFO: Pod "downward-api-01026fcc-5581-442e-854b-faefcc1e8b2d" satisfied condition "success or failure"
Jan 31 22:33:09.376: INFO: Trying to get logs from node jerma-node pod downward-api-01026fcc-5581-442e-854b-faefcc1e8b2d container dapi-container: 
STEP: delete the pod
Jan 31 22:33:09.439: INFO: Waiting for pod downward-api-01026fcc-5581-442e-854b-faefcc1e8b2d to disappear
Jan 31 22:33:09.444: INFO: Pod downward-api-01026fcc-5581-442e-854b-faefcc1e8b2d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:33:09.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4545" for this suite.

• [SLOW TEST:10.214 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3322,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:33:09.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0131 22:33:21.646893       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 22:33:21.647: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:33:21.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-132" for this suite.

• [SLOW TEST:12.984 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":202,"skipped":3327,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:33:22.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-091ea850-7071-4c04-b5d4-90d3c0caaf30
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:33:43.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2683" for this suite.

• [SLOW TEST:20.932 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3333,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:33:43.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 22:33:44.239: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 22:33:46.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:33:48.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:33:50.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:33:52.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106824, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 22:33:55.294: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:33:55.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9391" for this suite.
STEP: Destroying namespace "webhook-9391-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.297 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":204,"skipped":3353,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:33:55.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 22:33:55.829: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd415b33-f226-4b04-b787-b3e427c622d8" in namespace "downward-api-2827" to be "success or failure"
Jan 31 22:33:55.837: INFO: Pod "downwardapi-volume-fd415b33-f226-4b04-b787-b3e427c622d8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.309287ms
Jan 31 22:33:57.844: INFO: Pod "downwardapi-volume-fd415b33-f226-4b04-b787-b3e427c622d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014113218s
Jan 31 22:33:59.857: INFO: Pod "downwardapi-volume-fd415b33-f226-4b04-b787-b3e427c622d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027256191s
Jan 31 22:34:01.870: INFO: Pod "downwardapi-volume-fd415b33-f226-4b04-b787-b3e427c622d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040440793s
Jan 31 22:34:03.881: INFO: Pod "downwardapi-volume-fd415b33-f226-4b04-b787-b3e427c622d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051119707s
Jan 31 22:34:05.915: INFO: Pod "downwardapi-volume-fd415b33-f226-4b04-b787-b3e427c622d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085701488s
STEP: Saw pod success
Jan 31 22:34:05.915: INFO: Pod "downwardapi-volume-fd415b33-f226-4b04-b787-b3e427c622d8" satisfied condition "success or failure"
Jan 31 22:34:05.919: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-fd415b33-f226-4b04-b787-b3e427c622d8 container client-container: 
STEP: delete the pod
Jan 31 22:34:06.080: INFO: Waiting for pod downwardapi-volume-fd415b33-f226-4b04-b787-b3e427c622d8 to disappear
Jan 31 22:34:06.086: INFO: Pod downwardapi-volume-fd415b33-f226-4b04-b787-b3e427c622d8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:34:06.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2827" for this suite.

• [SLOW TEST:10.430 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3371,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:34:06.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-970816f9-cd29-4d63-bcf8-4f3c960039b5
STEP: Creating a pod to test consume configMaps
Jan 31 22:34:06.611: INFO: Waiting up to 5m0s for pod "pod-configmaps-65395bd4-270e-49a1-9ffc-c9fd1743de6c" in namespace "configmap-1745" to be "success or failure"
Jan 31 22:34:06.627: INFO: Pod "pod-configmaps-65395bd4-270e-49a1-9ffc-c9fd1743de6c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.038416ms
Jan 31 22:34:08.636: INFO: Pod "pod-configmaps-65395bd4-270e-49a1-9ffc-c9fd1743de6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024548368s
Jan 31 22:34:11.587: INFO: Pod "pod-configmaps-65395bd4-270e-49a1-9ffc-c9fd1743de6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.975827625s
Jan 31 22:34:13.593: INFO: Pod "pod-configmaps-65395bd4-270e-49a1-9ffc-c9fd1743de6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.981466437s
Jan 31 22:34:15.604: INFO: Pod "pod-configmaps-65395bd4-270e-49a1-9ffc-c9fd1743de6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.993409712s
STEP: Saw pod success
Jan 31 22:34:15.605: INFO: Pod "pod-configmaps-65395bd4-270e-49a1-9ffc-c9fd1743de6c" satisfied condition "success or failure"
Jan 31 22:34:15.687: INFO: Trying to get logs from node jerma-node pod pod-configmaps-65395bd4-270e-49a1-9ffc-c9fd1743de6c container configmap-volume-test: 
STEP: delete the pod
Jan 31 22:34:15.772: INFO: Waiting for pod pod-configmaps-65395bd4-270e-49a1-9ffc-c9fd1743de6c to disappear
Jan 31 22:34:15.778: INFO: Pod pod-configmaps-65395bd4-270e-49a1-9ffc-c9fd1743de6c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:34:15.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1745" for this suite.

• [SLOW TEST:9.723 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3396,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:34:15.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-dce79311-afe9-45e4-ad30-d4430c728996
STEP: Creating a pod to test consume secrets
Jan 31 22:34:16.023: INFO: Waiting up to 5m0s for pod "pod-secrets-f98a00f8-6800-40d1-8fc5-1a7763ce5560" in namespace "secrets-8995" to be "success or failure"
Jan 31 22:34:16.076: INFO: Pod "pod-secrets-f98a00f8-6800-40d1-8fc5-1a7763ce5560": Phase="Pending", Reason="", readiness=false. Elapsed: 53.52184ms
Jan 31 22:34:18.081: INFO: Pod "pod-secrets-f98a00f8-6800-40d1-8fc5-1a7763ce5560": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057781756s
Jan 31 22:34:20.086: INFO: Pod "pod-secrets-f98a00f8-6800-40d1-8fc5-1a7763ce5560": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062663484s
Jan 31 22:34:22.095: INFO: Pod "pod-secrets-f98a00f8-6800-40d1-8fc5-1a7763ce5560": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071890993s
Jan 31 22:34:24.103: INFO: Pod "pod-secrets-f98a00f8-6800-40d1-8fc5-1a7763ce5560": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079927605s
Jan 31 22:34:26.113: INFO: Pod "pod-secrets-f98a00f8-6800-40d1-8fc5-1a7763ce5560": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089846892s
STEP: Saw pod success
Jan 31 22:34:26.113: INFO: Pod "pod-secrets-f98a00f8-6800-40d1-8fc5-1a7763ce5560" satisfied condition "success or failure"
Jan 31 22:34:26.117: INFO: Trying to get logs from node jerma-node pod pod-secrets-f98a00f8-6800-40d1-8fc5-1a7763ce5560 container secret-volume-test: 
STEP: delete the pod
Jan 31 22:34:26.165: INFO: Waiting for pod pod-secrets-f98a00f8-6800-40d1-8fc5-1a7763ce5560 to disappear
Jan 31 22:34:26.168: INFO: Pod pod-secrets-f98a00f8-6800-40d1-8fc5-1a7763ce5560 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:34:26.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8995" for this suite.

• [SLOW TEST:10.352 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3431,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:34:26.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 31 22:34:26.341: INFO: Waiting up to 5m0s for pod "pod-64298439-e666-4bd7-ac4c-cddbefbbf878" in namespace "emptydir-8909" to be "success or failure"
Jan 31 22:34:26.365: INFO: Pod "pod-64298439-e666-4bd7-ac4c-cddbefbbf878": Phase="Pending", Reason="", readiness=false. Elapsed: 24.105808ms
Jan 31 22:34:28.372: INFO: Pod "pod-64298439-e666-4bd7-ac4c-cddbefbbf878": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031120375s
Jan 31 22:34:30.380: INFO: Pod "pod-64298439-e666-4bd7-ac4c-cddbefbbf878": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038591793s
Jan 31 22:34:32.385: INFO: Pod "pod-64298439-e666-4bd7-ac4c-cddbefbbf878": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044496364s
Jan 31 22:34:34.392: INFO: Pod "pod-64298439-e666-4bd7-ac4c-cddbefbbf878": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051512077s
STEP: Saw pod success
Jan 31 22:34:34.393: INFO: Pod "pod-64298439-e666-4bd7-ac4c-cddbefbbf878" satisfied condition "success or failure"
Jan 31 22:34:34.396: INFO: Trying to get logs from node jerma-node pod pod-64298439-e666-4bd7-ac4c-cddbefbbf878 container test-container: 
STEP: delete the pod
Jan 31 22:34:34.468: INFO: Waiting for pod pod-64298439-e666-4bd7-ac4c-cddbefbbf878 to disappear
Jan 31 22:34:34.475: INFO: Pod pod-64298439-e666-4bd7-ac4c-cddbefbbf878 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:34:34.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8909" for this suite.

• [SLOW TEST:8.399 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3431,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:34:34.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 31 22:34:34.738: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2500 /api/v1/namespaces/watch-2500/configmaps/e2e-watch-test-resource-version 19718cac-d6dd-4496-871e-29fb84705c17 5614620 0 2020-01-31 22:34:34 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 31 22:34:34.738: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2500 /api/v1/namespaces/watch-2500/configmaps/e2e-watch-test-resource-version 19718cac-d6dd-4496-871e-29fb84705c17 5614621 0 2020-01-31 22:34:34 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:34:34.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2500" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":209,"skipped":3466,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:34:34.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 22:34:34.899: INFO: Waiting up to 5m0s for pod "downwardapi-volume-149066fa-afdd-4709-b2e5-62e3f1824f69" in namespace "projected-4245" to be "success or failure"
Jan 31 22:34:34.930: INFO: Pod "downwardapi-volume-149066fa-afdd-4709-b2e5-62e3f1824f69": Phase="Pending", Reason="", readiness=false. Elapsed: 30.664442ms
Jan 31 22:34:36.937: INFO: Pod "downwardapi-volume-149066fa-afdd-4709-b2e5-62e3f1824f69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037482584s
Jan 31 22:34:38.944: INFO: Pod "downwardapi-volume-149066fa-afdd-4709-b2e5-62e3f1824f69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044598317s
Jan 31 22:34:40.952: INFO: Pod "downwardapi-volume-149066fa-afdd-4709-b2e5-62e3f1824f69": Phase="Running", Reason="", readiness=true. Elapsed: 6.052257894s
Jan 31 22:34:42.958: INFO: Pod "downwardapi-volume-149066fa-afdd-4709-b2e5-62e3f1824f69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058379324s
STEP: Saw pod success
Jan 31 22:34:42.958: INFO: Pod "downwardapi-volume-149066fa-afdd-4709-b2e5-62e3f1824f69" satisfied condition "success or failure"
Jan 31 22:34:42.962: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-149066fa-afdd-4709-b2e5-62e3f1824f69 container client-container: 
STEP: delete the pod
Jan 31 22:34:43.401: INFO: Waiting for pod downwardapi-volume-149066fa-afdd-4709-b2e5-62e3f1824f69 to disappear
Jan 31 22:34:43.415: INFO: Pod downwardapi-volume-149066fa-afdd-4709-b2e5-62e3f1824f69 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:34:43.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4245" for this suite.

• [SLOW TEST:8.711 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3520,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:34:43.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:34:43.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5460" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":211,"skipped":3528,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:34:43.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 22:34:43.814: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70983988-f8f7-4fee-a24a-4934e19efed8" in namespace "downward-api-9336" to be "success or failure"
Jan 31 22:34:43.947: INFO: Pod "downwardapi-volume-70983988-f8f7-4fee-a24a-4934e19efed8": Phase="Pending", Reason="", readiness=false. Elapsed: 132.475605ms
Jan 31 22:34:45.954: INFO: Pod "downwardapi-volume-70983988-f8f7-4fee-a24a-4934e19efed8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140200219s
Jan 31 22:34:47.962: INFO: Pod "downwardapi-volume-70983988-f8f7-4fee-a24a-4934e19efed8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147675548s
Jan 31 22:34:49.969: INFO: Pod "downwardapi-volume-70983988-f8f7-4fee-a24a-4934e19efed8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154816736s
Jan 31 22:34:51.976: INFO: Pod "downwardapi-volume-70983988-f8f7-4fee-a24a-4934e19efed8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162000458s
Jan 31 22:34:53.981: INFO: Pod "downwardapi-volume-70983988-f8f7-4fee-a24a-4934e19efed8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.166574356s
Jan 31 22:34:55.985: INFO: Pod "downwardapi-volume-70983988-f8f7-4fee-a24a-4934e19efed8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.170951497s
STEP: Saw pod success
Jan 31 22:34:55.985: INFO: Pod "downwardapi-volume-70983988-f8f7-4fee-a24a-4934e19efed8" satisfied condition "success or failure"
Jan 31 22:34:55.988: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-70983988-f8f7-4fee-a24a-4934e19efed8 container client-container: 
STEP: delete the pod
Jan 31 22:34:56.044: INFO: Waiting for pod downwardapi-volume-70983988-f8f7-4fee-a24a-4934e19efed8 to disappear
Jan 31 22:34:56.054: INFO: Pod downwardapi-volume-70983988-f8f7-4fee-a24a-4934e19efed8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:34:56.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9336" for this suite.

• [SLOW TEST:12.374 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3533,"failed":0}
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:34:56.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-4eb4a906-0def-4b2d-b73e-fde2e135ea34
STEP: Creating a pod to test consume configMaps
Jan 31 22:34:56.314: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d312e80d-ea75-4b6c-a7bf-95f0bb569c93" in namespace "projected-6069" to be "success or failure"
Jan 31 22:34:56.343: INFO: Pod "pod-projected-configmaps-d312e80d-ea75-4b6c-a7bf-95f0bb569c93": Phase="Pending", Reason="", readiness=false. Elapsed: 29.707619ms
Jan 31 22:34:58.351: INFO: Pod "pod-projected-configmaps-d312e80d-ea75-4b6c-a7bf-95f0bb569c93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037173387s
Jan 31 22:35:00.360: INFO: Pod "pod-projected-configmaps-d312e80d-ea75-4b6c-a7bf-95f0bb569c93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046208864s
Jan 31 22:35:02.375: INFO: Pod "pod-projected-configmaps-d312e80d-ea75-4b6c-a7bf-95f0bb569c93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060985408s
Jan 31 22:35:04.383: INFO: Pod "pod-projected-configmaps-d312e80d-ea75-4b6c-a7bf-95f0bb569c93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06905816s
STEP: Saw pod success
Jan 31 22:35:04.383: INFO: Pod "pod-projected-configmaps-d312e80d-ea75-4b6c-a7bf-95f0bb569c93" satisfied condition "success or failure"
Jan 31 22:35:04.390: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-d312e80d-ea75-4b6c-a7bf-95f0bb569c93 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 22:35:04.513: INFO: Waiting for pod pod-projected-configmaps-d312e80d-ea75-4b6c-a7bf-95f0bb569c93 to disappear
Jan 31 22:35:04.520: INFO: Pod pod-projected-configmaps-d312e80d-ea75-4b6c-a7bf-95f0bb569c93 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:35:04.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6069" for this suite.

• [SLOW TEST:8.463 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3533,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:35:04.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-27e66330-2e4f-4d59-8265-0e7d18c816d6
STEP: Creating a pod to test consume configMaps
Jan 31 22:35:04.651: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1f3318ce-dc77-4a80-a2e3-11b38a45a1d0" in namespace "projected-1096" to be "success or failure"
Jan 31 22:35:04.681: INFO: Pod "pod-projected-configmaps-1f3318ce-dc77-4a80-a2e3-11b38a45a1d0": Phase="Pending", Reason="", readiness=false. Elapsed: 30.460562ms
Jan 31 22:35:06.688: INFO: Pod "pod-projected-configmaps-1f3318ce-dc77-4a80-a2e3-11b38a45a1d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037195952s
Jan 31 22:35:08.715: INFO: Pod "pod-projected-configmaps-1f3318ce-dc77-4a80-a2e3-11b38a45a1d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063606474s
Jan 31 22:35:10.723: INFO: Pod "pod-projected-configmaps-1f3318ce-dc77-4a80-a2e3-11b38a45a1d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071848247s
Jan 31 22:35:12.730: INFO: Pod "pod-projected-configmaps-1f3318ce-dc77-4a80-a2e3-11b38a45a1d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078911978s
STEP: Saw pod success
Jan 31 22:35:12.730: INFO: Pod "pod-projected-configmaps-1f3318ce-dc77-4a80-a2e3-11b38a45a1d0" satisfied condition "success or failure"
Jan 31 22:35:12.734: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-1f3318ce-dc77-4a80-a2e3-11b38a45a1d0 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 22:35:12.819: INFO: Waiting for pod pod-projected-configmaps-1f3318ce-dc77-4a80-a2e3-11b38a45a1d0 to disappear
Jan 31 22:35:12.916: INFO: Pod pod-projected-configmaps-1f3318ce-dc77-4a80-a2e3-11b38a45a1d0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:35:12.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1096" for this suite.

• [SLOW TEST:8.397 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3541,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:35:12.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-c62abbb9-f647-4bf0-9900-619bda3dba74
STEP: Creating a pod to test consume configMaps
Jan 31 22:35:13.010: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0c6592c7-933f-4f94-acc1-0df6b498e1dd" in namespace "projected-6079" to be "success or failure"
Jan 31 22:35:13.056: INFO: Pod "pod-projected-configmaps-0c6592c7-933f-4f94-acc1-0df6b498e1dd": Phase="Pending", Reason="", readiness=false. Elapsed: 46.035049ms
Jan 31 22:35:15.098: INFO: Pod "pod-projected-configmaps-0c6592c7-933f-4f94-acc1-0df6b498e1dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08713309s
Jan 31 22:35:17.107: INFO: Pod "pod-projected-configmaps-0c6592c7-933f-4f94-acc1-0df6b498e1dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096632545s
Jan 31 22:35:19.112: INFO: Pod "pod-projected-configmaps-0c6592c7-933f-4f94-acc1-0df6b498e1dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101198967s
Jan 31 22:35:21.123: INFO: Pod "pod-projected-configmaps-0c6592c7-933f-4f94-acc1-0df6b498e1dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113108706s
STEP: Saw pod success
Jan 31 22:35:21.124: INFO: Pod "pod-projected-configmaps-0c6592c7-933f-4f94-acc1-0df6b498e1dd" satisfied condition "success or failure"
Jan 31 22:35:21.201: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-0c6592c7-933f-4f94-acc1-0df6b498e1dd container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 22:35:21.255: INFO: Waiting for pod pod-projected-configmaps-0c6592c7-933f-4f94-acc1-0df6b498e1dd to disappear
Jan 31 22:35:21.272: INFO: Pod pod-projected-configmaps-0c6592c7-933f-4f94-acc1-0df6b498e1dd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:35:21.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6079" for this suite.

• [SLOW TEST:8.355 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3586,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:35:21.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 22:35:22.402: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 22:35:24.437: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106922, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106922, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106922, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106922, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:35:26.445: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106922, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106922, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106922, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106922, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:35:28.446: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106922, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106922, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106922, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106922, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 22:35:31.493: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one and verify the webhook rejects it
STEP: update (PATCH) the admitted configmap to a non-compliant one and verify the webhook rejects it
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:35:41.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9733" for this suite.
STEP: Destroying namespace "webhook-9733-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.778 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":216,"skipped":3587,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:35:42.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 22:35:42.919: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 22:35:44.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:35:46.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:35:48.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:35:51.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716106942, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 22:35:53.997: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:35:54.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:35:55.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8309" for this suite.
STEP: Destroying namespace "webhook-8309-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.494 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":217,"skipped":3628,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:35:55.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 31 22:36:06.365: INFO: Successfully updated pod "pod-update-319c8d70-dd0c-45f4-b3d3-6c16464724df"
STEP: verifying the updated pod is in kubernetes
Jan 31 22:36:06.386: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:36:06.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5085" for this suite.

• [SLOW TEST:10.851 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3642,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:36:06.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3828.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3828.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3828.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3828.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3828.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3828.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 22:36:18.607: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:18.618: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:18.633: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:18.648: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:18.677: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:18.689: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:18.709: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:18.718: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:18.799: INFO: Lookups using dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3828.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3828.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local jessie_udp@dns-test-service-2.dns-3828.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3828.svc.cluster.local]

Jan 31 22:36:23.860: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:23.877: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:23.883: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:23.887: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:23.900: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:23.904: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:23.910: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:23.914: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:23.922: INFO: Lookups using dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3828.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3828.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local jessie_udp@dns-test-service-2.dns-3828.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3828.svc.cluster.local]

Jan 31 22:36:28.829: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:28.834: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:28.837: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:28.841: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:28.874: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:28.879: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:28.888: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:28.893: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:28.947: INFO: Lookups using dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3828.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3828.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local jessie_udp@dns-test-service-2.dns-3828.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3828.svc.cluster.local]

Jan 31 22:36:33.810: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:33.822: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:33.828: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:33.834: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:33.848: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:33.853: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:33.868: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:33.920: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:33.932: INFO: Lookups using dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3828.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3828.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local jessie_udp@dns-test-service-2.dns-3828.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3828.svc.cluster.local]

Jan 31 22:36:38.806: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:38.814: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:38.820: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:38.826: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:38.840: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:38.845: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:38.851: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:38.856: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:38.873: INFO: Lookups using dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3828.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3828.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local jessie_udp@dns-test-service-2.dns-3828.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3828.svc.cluster.local]

Jan 31 22:36:43.841: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:43.845: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:43.855: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:43.868: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:43.886: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:43.893: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:43.902: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:43.912: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3828.svc.cluster.local from pod dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0: the server could not find the requested resource (get pods dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0)
Jan 31 22:36:43.926: INFO: Lookups using dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3828.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3828.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3828.svc.cluster.local jessie_udp@dns-test-service-2.dns-3828.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3828.svc.cluster.local]

Jan 31 22:36:48.851: INFO: DNS probes using dns-3828/dns-test-83e350b6-4d7f-44be-a68f-f7ac52d830e0 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:36:49.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3828" for this suite.

• [SLOW TEST:42.775 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":219,"skipped":3648,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:36:49.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jan 31 22:36:49.266: INFO: Created pod &Pod{ObjectMeta:{dns-612  dns-612 /api/v1/namespaces/dns-612/pods/dns-612 1f1cb146-b374-4daa-aa87-25525d3e7c34 5615298 0 2020-01-31 22:36:49 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ltp6x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ltp6x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ltp6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Jan 31 22:36:59.332: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-612 PodName:dns-612 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:36:59.332: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:36:59.398598       8 log.go:172] (0xc0029fa580) (0xc001d7ce60) Create stream
I0131 22:36:59.399092       8 log.go:172] (0xc0029fa580) (0xc001d7ce60) Stream added, broadcasting: 1
I0131 22:36:59.403217       8 log.go:172] (0xc0029fa580) Reply frame received for 1
I0131 22:36:59.403264       8 log.go:172] (0xc0029fa580) (0xc000a09e00) Create stream
I0131 22:36:59.403277       8 log.go:172] (0xc0029fa580) (0xc000a09e00) Stream added, broadcasting: 3
I0131 22:36:59.405091       8 log.go:172] (0xc0029fa580) Reply frame received for 3
I0131 22:36:59.405124       8 log.go:172] (0xc0029fa580) (0xc002057900) Create stream
I0131 22:36:59.405138       8 log.go:172] (0xc0029fa580) (0xc002057900) Stream added, broadcasting: 5
I0131 22:36:59.407363       8 log.go:172] (0xc0029fa580) Reply frame received for 5
I0131 22:36:59.535102       8 log.go:172] (0xc0029fa580) Data frame received for 3
I0131 22:36:59.535334       8 log.go:172] (0xc000a09e00) (3) Data frame handling
I0131 22:36:59.535367       8 log.go:172] (0xc000a09e00) (3) Data frame sent
I0131 22:36:59.673117       8 log.go:172] (0xc0029fa580) (0xc000a09e00) Stream removed, broadcasting: 3
I0131 22:36:59.673279       8 log.go:172] (0xc0029fa580) Data frame received for 1
I0131 22:36:59.673309       8 log.go:172] (0xc001d7ce60) (1) Data frame handling
I0131 22:36:59.673336       8 log.go:172] (0xc001d7ce60) (1) Data frame sent
I0131 22:36:59.673353       8 log.go:172] (0xc0029fa580) (0xc002057900) Stream removed, broadcasting: 5
I0131 22:36:59.673421       8 log.go:172] (0xc0029fa580) (0xc001d7ce60) Stream removed, broadcasting: 1
I0131 22:36:59.673825       8 log.go:172] (0xc0029fa580) Go away received
I0131 22:36:59.674039       8 log.go:172] (0xc0029fa580) (0xc001d7ce60) Stream removed, broadcasting: 1
I0131 22:36:59.674074       8 log.go:172] (0xc0029fa580) (0xc000a09e00) Stream removed, broadcasting: 3
I0131 22:36:59.674081       8 log.go:172] (0xc0029fa580) (0xc002057900) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jan 31 22:36:59.674: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-612 PodName:dns-612 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:36:59.674: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:36:59.745556       8 log.go:172] (0xc0026d2d10) (0xc001d09ea0) Create stream
I0131 22:36:59.745717       8 log.go:172] (0xc0026d2d10) (0xc001d09ea0) Stream added, broadcasting: 1
I0131 22:36:59.750627       8 log.go:172] (0xc0026d2d10) Reply frame received for 1
I0131 22:36:59.750678       8 log.go:172] (0xc0026d2d10) (0xc001d7cf00) Create stream
I0131 22:36:59.750693       8 log.go:172] (0xc0026d2d10) (0xc001d7cf00) Stream added, broadcasting: 3
I0131 22:36:59.751670       8 log.go:172] (0xc0026d2d10) Reply frame received for 3
I0131 22:36:59.751696       8 log.go:172] (0xc0026d2d10) (0xc002892000) Create stream
I0131 22:36:59.751717       8 log.go:172] (0xc0026d2d10) (0xc002892000) Stream added, broadcasting: 5
I0131 22:36:59.752966       8 log.go:172] (0xc0026d2d10) Reply frame received for 5
I0131 22:36:59.886754       8 log.go:172] (0xc0026d2d10) Data frame received for 3
I0131 22:36:59.886934       8 log.go:172] (0xc001d7cf00) (3) Data frame handling
I0131 22:36:59.886977       8 log.go:172] (0xc001d7cf00) (3) Data frame sent
I0131 22:37:00.008343       8 log.go:172] (0xc0026d2d10) Data frame received for 1
I0131 22:37:00.008748       8 log.go:172] (0xc001d09ea0) (1) Data frame handling
I0131 22:37:00.008785       8 log.go:172] (0xc001d09ea0) (1) Data frame sent
I0131 22:37:00.009355       8 log.go:172] (0xc0026d2d10) (0xc002892000) Stream removed, broadcasting: 5
I0131 22:37:00.009517       8 log.go:172] (0xc0026d2d10) (0xc001d09ea0) Stream removed, broadcasting: 1
I0131 22:37:00.010010       8 log.go:172] (0xc0026d2d10) (0xc001d7cf00) Stream removed, broadcasting: 3
I0131 22:37:00.010049       8 log.go:172] (0xc0026d2d10) (0xc001d09ea0) Stream removed, broadcasting: 1
I0131 22:37:00.010088       8 log.go:172] (0xc0026d2d10) (0xc001d7cf00) Stream removed, broadcasting: 3
I0131 22:37:00.010095       8 log.go:172] (0xc0026d2d10) (0xc002892000) Stream removed, broadcasting: 5
I0131 22:37:00.010137       8 log.go:172] (0xc0026d2d10) Go away received
Jan 31 22:37:00.010: INFO: Deleting pod dns-612...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:37:00.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-612" for this suite.

• [SLOW TEST:10.912 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":220,"skipped":3660,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:37:00.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 31 22:37:00.237: INFO: Waiting up to 5m0s for pod "downward-api-5ac9cb64-d874-4d8b-9602-6acbaacce37c" in namespace "downward-api-2396" to be "success or failure"
Jan 31 22:37:00.247: INFO: Pod "downward-api-5ac9cb64-d874-4d8b-9602-6acbaacce37c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.148684ms
Jan 31 22:37:02.259: INFO: Pod "downward-api-5ac9cb64-d874-4d8b-9602-6acbaacce37c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021654356s
Jan 31 22:37:04.269: INFO: Pod "downward-api-5ac9cb64-d874-4d8b-9602-6acbaacce37c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03192428s
Jan 31 22:37:06.276: INFO: Pod "downward-api-5ac9cb64-d874-4d8b-9602-6acbaacce37c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039571581s
Jan 31 22:37:08.285: INFO: Pod "downward-api-5ac9cb64-d874-4d8b-9602-6acbaacce37c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047774996s
Jan 31 22:37:10.292: INFO: Pod "downward-api-5ac9cb64-d874-4d8b-9602-6acbaacce37c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055268801s
STEP: Saw pod success
Jan 31 22:37:10.292: INFO: Pod "downward-api-5ac9cb64-d874-4d8b-9602-6acbaacce37c" satisfied condition "success or failure"
Jan 31 22:37:10.297: INFO: Trying to get logs from node jerma-node pod downward-api-5ac9cb64-d874-4d8b-9602-6acbaacce37c container dapi-container: 
STEP: delete the pod
Jan 31 22:37:10.588: INFO: Waiting for pod downward-api-5ac9cb64-d874-4d8b-9602-6acbaacce37c to disappear
Jan 31 22:37:10.617: INFO: Pod downward-api-5ac9cb64-d874-4d8b-9602-6acbaacce37c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:37:10.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2396" for this suite.

• [SLOW TEST:10.524 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3676,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:37:10.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 22:37:11.477: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 22:37:13.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107031, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107031, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107031, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107031, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:37:15.505 - 22:37:19.502: INFO: deployment status unchanged across the next three polls: Available=False (Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."), Progressing=True (Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing.")
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 22:37:22.543: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:37:23.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2037" for this suite.
STEP: Destroying namespace "webhook-2037-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.508 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":222,"skipped":3699,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:37:23.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-9230c423-c7de-44b8-bcb8-b0d9317155ed
STEP: Creating configMap with name cm-test-opt-upd-2601c95b-34a9-4947-8b65-f08bd6d99d00
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9230c423-c7de-44b8-bcb8-b0d9317155ed
STEP: Updating configmap cm-test-opt-upd-2601c95b-34a9-4947-8b65-f08bd6d99d00
STEP: Creating configMap with name cm-test-opt-create-98f61d36-40b7-4b4b-817f-eff785cffa8d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:38:44.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2230" for this suite.

• [SLOW TEST:81.341 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3730,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:38:44.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Jan 31 22:38:44.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:38:59.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7665" for this suite.

• [SLOW TEST:14.927 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":224,"skipped":3736,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:38:59.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 31 22:39:08.162: INFO: Successfully updated pod "annotationupdatea567ddd2-4b36-419d-b49d-91d7bf41e238"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:39:10.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2032" for this suite.

• [SLOW TEST:10.960 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3753,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:39:10.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:39:10.453: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-9837212b-a9f3-430a-884c-39c8da9228bc" in namespace "security-context-test-1283" to be "success or failure"
Jan 31 22:39:10.472: INFO: Pod "busybox-privileged-false-9837212b-a9f3-430a-884c-39c8da9228bc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.772555ms
Jan 31 22:39:12.482: INFO: Pod "busybox-privileged-false-9837212b-a9f3-430a-884c-39c8da9228bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028502116s
Jan 31 22:39:14.490: INFO: Pod "busybox-privileged-false-9837212b-a9f3-430a-884c-39c8da9228bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03671322s
Jan 31 22:39:16.499: INFO: Pod "busybox-privileged-false-9837212b-a9f3-430a-884c-39c8da9228bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046348855s
Jan 31 22:39:18.506: INFO: Pod "busybox-privileged-false-9837212b-a9f3-430a-884c-39c8da9228bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052820456s
Jan 31 22:39:18.506: INFO: Pod "busybox-privileged-false-9837212b-a9f3-430a-884c-39c8da9228bc" satisfied condition "success or failure"
Jan 31 22:39:18.526: INFO: Got logs for pod "busybox-privileged-false-9837212b-a9f3-430a-884c-39c8da9228bc": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:39:18.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1283" for this suite.

• [SLOW TEST:8.168 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3763,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:39:18.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:39:30.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3312" for this suite.

• [SLOW TEST:12.255 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3774,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:39:30.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Jan 31 22:39:31.044: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 31 22:39:36.051: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:39:36.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7282" for this suite.

• [SLOW TEST:5.636 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":228,"skipped":3780,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:39:36.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-a37cfbfc-22f0-4cea-acad-aa61da15a1f0
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-a37cfbfc-22f0-4cea-acad-aa61da15a1f0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:41:18.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-990" for this suite.

• [SLOW TEST:101.681 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3821,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:41:18.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-826b0466-80d6-46c3-9826-157c93e4a74d
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:41:18.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3621" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":230,"skipped":3833,"failed":0}
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:41:18.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 31 22:41:18.371: INFO: Number of nodes with available pods: 0
Jan 31 22:41:18.371: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:19.385: INFO: Number of nodes with available pods: 0
Jan 31 22:41:19.385: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:21.119: INFO: Number of nodes with available pods: 0
Jan 31 22:41:21.119: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:21.495: INFO: Number of nodes with available pods: 0
Jan 31 22:41:21.495: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:22.386: INFO: Number of nodes with available pods: 0
Jan 31 22:41:22.386: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:23.408: INFO: Number of nodes with available pods: 0
Jan 31 22:41:23.408: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:24.380: INFO: Number of nodes with available pods: 0
Jan 31 22:41:24.380: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:26.588: INFO: Number of nodes with available pods: 0
Jan 31 22:41:26.588: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:27.869: INFO: Number of nodes with available pods: 0
Jan 31 22:41:27.869: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:28.384: INFO: Number of nodes with available pods: 0
Jan 31 22:41:28.384: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:29.384: INFO: Number of nodes with available pods: 1
Jan 31 22:41:29.384: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 22:41:30.385: INFO: Number of nodes with available pods: 2
Jan 31 22:41:30.385: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed'; check that the daemon pod is revived.
Jan 31 22:41:30.464: INFO: Number of nodes with available pods: 1
Jan 31 22:41:30.464: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:31.474: INFO: Number of nodes with available pods: 1
Jan 31 22:41:31.474: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:32.490: INFO: Number of nodes with available pods: 1
Jan 31 22:41:32.490: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:33.477: INFO: Number of nodes with available pods: 1
Jan 31 22:41:33.477: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:34.477: INFO: Number of nodes with available pods: 1
Jan 31 22:41:34.477: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:35.472: INFO: Number of nodes with available pods: 1
Jan 31 22:41:35.472: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:36.477: INFO: Number of nodes with available pods: 1
Jan 31 22:41:36.477: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:37.475: INFO: Number of nodes with available pods: 1
Jan 31 22:41:37.475: INFO: Node jerma-node is running more than one daemon pod
Jan 31 22:41:38.486: INFO: Number of nodes with available pods: 2
Jan 31 22:41:38.486: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1880, will wait for the garbage collector to delete the pods
Jan 31 22:41:38.569: INFO: Deleting DaemonSet.extensions daemon-set took: 20.448397ms
Jan 31 22:41:38.970: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.849481ms
Jan 31 22:41:53.178: INFO: Number of nodes with available pods: 0
Jan 31 22:41:53.178: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 22:41:53.182: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1880/daemonsets","resourceVersion":"5616406"},"items":null}

Jan 31 22:41:53.189: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1880/pods","resourceVersion":"5616406"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:41:53.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1880" for this suite.

• [SLOW TEST:34.997 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":231,"skipped":3837,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:41:53.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:41:53.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows requests with any unknown properties
Jan 31 22:41:56.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1743 create -f -'
Jan 31 22:41:59.163: INFO: stderr: ""
Jan 31 22:41:59.163: INFO: stdout: "e2e-test-crd-publish-openapi-6471-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 31 22:41:59.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1743 delete e2e-test-crd-publish-openapi-6471-crds test-cr'
Jan 31 22:41:59.322: INFO: stderr: ""
Jan 31 22:41:59.322: INFO: stdout: "e2e-test-crd-publish-openapi-6471-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jan 31 22:41:59.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1743 apply -f -'
Jan 31 22:41:59.741: INFO: stderr: ""
Jan 31 22:41:59.741: INFO: stdout: "e2e-test-crd-publish-openapi-6471-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 31 22:41:59.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1743 delete e2e-test-crd-publish-openapi-6471-crds test-cr'
Jan 31 22:41:59.887: INFO: stderr: ""
Jan 31 22:41:59.888: INFO: stdout: "e2e-test-crd-publish-openapi-6471-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 31 22:41:59.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6471-crds'
Jan 31 22:42:00.292: INFO: stderr: ""
Jan 31 22:42:00.292: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6471-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:42:03.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1743" for this suite.

• [SLOW TEST:9.939 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":232,"skipped":3848,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:42:03.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-94p2
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 22:42:03.279: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-94p2" in namespace "subpath-4181" to be "success or failure"
Jan 31 22:42:03.289: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.05083ms
Jan 31 22:42:05.295: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015992959s
Jan 31 22:42:07.301: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022148555s
Jan 31 22:42:09.309: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Running", Reason="", readiness=true. Elapsed: 6.029572898s
Jan 31 22:42:11.314: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Running", Reason="", readiness=true. Elapsed: 8.034396046s
Jan 31 22:42:13.320: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Running", Reason="", readiness=true. Elapsed: 10.040751475s
Jan 31 22:42:15.329: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Running", Reason="", readiness=true. Elapsed: 12.04997698s
Jan 31 22:42:17.351: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Running", Reason="", readiness=true. Elapsed: 14.071733381s
Jan 31 22:42:19.360: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Running", Reason="", readiness=true. Elapsed: 16.081019263s
Jan 31 22:42:21.367: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Running", Reason="", readiness=true. Elapsed: 18.087511278s
Jan 31 22:42:23.374: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Running", Reason="", readiness=true. Elapsed: 20.095113215s
Jan 31 22:42:25.382: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Running", Reason="", readiness=true. Elapsed: 22.103181445s
Jan 31 22:42:27.390: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Running", Reason="", readiness=true. Elapsed: 24.111032134s
Jan 31 22:42:29.413: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Running", Reason="", readiness=true. Elapsed: 26.133440567s
Jan 31 22:42:31.420: INFO: Pod "pod-subpath-test-secret-94p2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.140487767s
STEP: Saw pod success
Jan 31 22:42:31.420: INFO: Pod "pod-subpath-test-secret-94p2" satisfied condition "success or failure"
Jan 31 22:42:31.424: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-94p2 container test-container-subpath-secret-94p2: 
STEP: delete the pod
Jan 31 22:42:31.558: INFO: Waiting for pod pod-subpath-test-secret-94p2 to disappear
Jan 31 22:42:31.575: INFO: Pod pod-subpath-test-secret-94p2 no longer exists
STEP: Deleting pod pod-subpath-test-secret-94p2
Jan 31 22:42:31.575: INFO: Deleting pod "pod-subpath-test-secret-94p2" in namespace "subpath-4181"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:42:31.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4181" for this suite.

• [SLOW TEST:28.436 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":233,"skipped":3870,"failed":0}
SSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:42:31.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-9151
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9151
STEP: Deleting pre-stop pod
Jan 31 22:42:50.931: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:42:50.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9151" for this suite.

• [SLOW TEST:19.395 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":234,"skipped":3877,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:42:50.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 31 22:43:13.218: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6841 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:43:13.218: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:43:13.303515       8 log.go:172] (0xc0056342c0) (0xc001393b80) Create stream
I0131 22:43:13.303649       8 log.go:172] (0xc0056342c0) (0xc001393b80) Stream added, broadcasting: 1
I0131 22:43:13.306689       8 log.go:172] (0xc0056342c0) Reply frame received for 1
I0131 22:43:13.306743       8 log.go:172] (0xc0056342c0) (0xc002893e00) Create stream
I0131 22:43:13.306752       8 log.go:172] (0xc0056342c0) (0xc002893e00) Stream added, broadcasting: 3
I0131 22:43:13.308095       8 log.go:172] (0xc0056342c0) Reply frame received for 3
I0131 22:43:13.308155       8 log.go:172] (0xc0056342c0) (0xc002893ea0) Create stream
I0131 22:43:13.308165       8 log.go:172] (0xc0056342c0) (0xc002893ea0) Stream added, broadcasting: 5
I0131 22:43:13.311016       8 log.go:172] (0xc0056342c0) Reply frame received for 5
I0131 22:43:13.407893       8 log.go:172] (0xc0056342c0) Data frame received for 3
I0131 22:43:13.407965       8 log.go:172] (0xc002893e00) (3) Data frame handling
I0131 22:43:13.407997       8 log.go:172] (0xc002893e00) (3) Data frame sent
I0131 22:43:13.488494       8 log.go:172] (0xc0056342c0) (0xc002893e00) Stream removed, broadcasting: 3
I0131 22:43:13.488661       8 log.go:172] (0xc0056342c0) Data frame received for 1
I0131 22:43:13.488716       8 log.go:172] (0xc0056342c0) (0xc002893ea0) Stream removed, broadcasting: 5
I0131 22:43:13.488749       8 log.go:172] (0xc001393b80) (1) Data frame handling
I0131 22:43:13.488777       8 log.go:172] (0xc001393b80) (1) Data frame sent
I0131 22:43:13.488788       8 log.go:172] (0xc0056342c0) (0xc001393b80) Stream removed, broadcasting: 1
I0131 22:43:13.488794       8 log.go:172] (0xc0056342c0) Go away received
I0131 22:43:13.489208       8 log.go:172] (0xc0056342c0) (0xc001393b80) Stream removed, broadcasting: 1
I0131 22:43:13.489243       8 log.go:172] (0xc0056342c0) (0xc002893e00) Stream removed, broadcasting: 3
I0131 22:43:13.489257       8 log.go:172] (0xc0056342c0) (0xc002893ea0) Stream removed, broadcasting: 5
Jan 31 22:43:13.489: INFO: Exec stderr: ""
Jan 31 22:43:13.489: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6841 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:43:13.489: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:43:13.532380       8 log.go:172] (0xc0056e6370) (0xc000940820) Create stream
I0131 22:43:13.532740       8 log.go:172] (0xc0056e6370) (0xc000940820) Stream added, broadcasting: 1
I0131 22:43:13.535896       8 log.go:172] (0xc0056e6370) Reply frame received for 1
I0131 22:43:13.535970       8 log.go:172] (0xc0056e6370) (0xc001118fa0) Create stream
I0131 22:43:13.535996       8 log.go:172] (0xc0056e6370) (0xc001118fa0) Stream added, broadcasting: 3
I0131 22:43:13.537185       8 log.go:172] (0xc0056e6370) Reply frame received for 3
I0131 22:43:13.537208       8 log.go:172] (0xc0056e6370) (0xc000940960) Create stream
I0131 22:43:13.537222       8 log.go:172] (0xc0056e6370) (0xc000940960) Stream added, broadcasting: 5
I0131 22:43:13.538531       8 log.go:172] (0xc0056e6370) Reply frame received for 5
I0131 22:43:13.631721       8 log.go:172] (0xc0056e6370) Data frame received for 3
I0131 22:43:13.631929       8 log.go:172] (0xc001118fa0) (3) Data frame handling
I0131 22:43:13.632009       8 log.go:172] (0xc001118fa0) (3) Data frame sent
I0131 22:43:13.719443       8 log.go:172] (0xc0056e6370) Data frame received for 1
I0131 22:43:13.719525       8 log.go:172] (0xc000940820) (1) Data frame handling
I0131 22:43:13.719550       8 log.go:172] (0xc000940820) (1) Data frame sent
I0131 22:43:13.720505       8 log.go:172] (0xc0056e6370) (0xc000940960) Stream removed, broadcasting: 5
I0131 22:43:13.720640       8 log.go:172] (0xc0056e6370) (0xc000940820) Stream removed, broadcasting: 1
I0131 22:43:13.720738       8 log.go:172] (0xc0056e6370) (0xc001118fa0) Stream removed, broadcasting: 3
I0131 22:43:13.720836       8 log.go:172] (0xc0056e6370) Go away received
I0131 22:43:13.721135       8 log.go:172] (0xc0056e6370) (0xc000940820) Stream removed, broadcasting: 1
I0131 22:43:13.721147       8 log.go:172] (0xc0056e6370) (0xc001118fa0) Stream removed, broadcasting: 3
I0131 22:43:13.721159       8 log.go:172] (0xc0056e6370) (0xc000940960) Stream removed, broadcasting: 5
Jan 31 22:43:13.721: INFO: Exec stderr: ""
Jan 31 22:43:13.721: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6841 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:43:13.721: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:43:13.777916       8 log.go:172] (0xc0050e8a50) (0xc001119680) Create stream
I0131 22:43:13.778098       8 log.go:172] (0xc0050e8a50) (0xc001119680) Stream added, broadcasting: 1
I0131 22:43:13.787012       8 log.go:172] (0xc0050e8a50) Reply frame received for 1
I0131 22:43:13.787112       8 log.go:172] (0xc0050e8a50) (0xc001d7df40) Create stream
I0131 22:43:13.787127       8 log.go:172] (0xc0050e8a50) (0xc001d7df40) Stream added, broadcasting: 3
I0131 22:43:13.789513       8 log.go:172] (0xc0050e8a50) Reply frame received for 3
I0131 22:43:13.789548       8 log.go:172] (0xc0050e8a50) (0xc001393e00) Create stream
I0131 22:43:13.789559       8 log.go:172] (0xc0050e8a50) (0xc001393e00) Stream added, broadcasting: 5
I0131 22:43:13.801694       8 log.go:172] (0xc0050e8a50) Reply frame received for 5
I0131 22:43:13.902175       8 log.go:172] (0xc0050e8a50) Data frame received for 3
I0131 22:43:13.902400       8 log.go:172] (0xc001d7df40) (3) Data frame handling
I0131 22:43:13.902446       8 log.go:172] (0xc001d7df40) (3) Data frame sent
I0131 22:43:14.056155       8 log.go:172] (0xc0050e8a50) (0xc001d7df40) Stream removed, broadcasting: 3
I0131 22:43:14.057382       8 log.go:172] (0xc0050e8a50) Data frame received for 1
I0131 22:43:14.057738       8 log.go:172] (0xc0050e8a50) (0xc001393e00) Stream removed, broadcasting: 5
I0131 22:43:14.057884       8 log.go:172] (0xc001119680) (1) Data frame handling
I0131 22:43:14.057965       8 log.go:172] (0xc001119680) (1) Data frame sent
I0131 22:43:14.057993       8 log.go:172] (0xc0050e8a50) (0xc001119680) Stream removed, broadcasting: 1
I0131 22:43:14.058052       8 log.go:172] (0xc0050e8a50) Go away received
I0131 22:43:14.059311       8 log.go:172] (0xc0050e8a50) (0xc001119680) Stream removed, broadcasting: 1
I0131 22:43:14.059499       8 log.go:172] (0xc0050e8a50) (0xc001d7df40) Stream removed, broadcasting: 3
I0131 22:43:14.059571       8 log.go:172] (0xc0050e8a50) (0xc001393e00) Stream removed, broadcasting: 5
Jan 31 22:43:14.059: INFO: Exec stderr: ""
Jan 31 22:43:14.059: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6841 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:43:14.060: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:43:14.150173       8 log.go:172] (0xc004cf1290) (0xc001d083c0) Create stream
I0131 22:43:14.150369       8 log.go:172] (0xc004cf1290) (0xc001d083c0) Stream added, broadcasting: 1
I0131 22:43:14.157045       8 log.go:172] (0xc004cf1290) Reply frame received for 1
I0131 22:43:14.157133       8 log.go:172] (0xc004cf1290) (0xc000940d20) Create stream
I0131 22:43:14.157166       8 log.go:172] (0xc004cf1290) (0xc000940d20) Stream added, broadcasting: 3
I0131 22:43:14.161039       8 log.go:172] (0xc004cf1290) Reply frame received for 3
I0131 22:43:14.161263       8 log.go:172] (0xc004cf1290) (0xc001119b80) Create stream
I0131 22:43:14.161299       8 log.go:172] (0xc004cf1290) (0xc001119b80) Stream added, broadcasting: 5
I0131 22:43:14.164859       8 log.go:172] (0xc004cf1290) Reply frame received for 5
I0131 22:43:14.277962       8 log.go:172] (0xc004cf1290) Data frame received for 3
I0131 22:43:14.278103       8 log.go:172] (0xc000940d20) (3) Data frame handling
I0131 22:43:14.278133       8 log.go:172] (0xc000940d20) (3) Data frame sent
I0131 22:43:14.366431       8 log.go:172] (0xc004cf1290) (0xc000940d20) Stream removed, broadcasting: 3
I0131 22:43:14.366584       8 log.go:172] (0xc004cf1290) Data frame received for 1
I0131 22:43:14.366614       8 log.go:172] (0xc001d083c0) (1) Data frame handling
I0131 22:43:14.366635       8 log.go:172] (0xc001d083c0) (1) Data frame sent
I0131 22:43:14.366744       8 log.go:172] (0xc004cf1290) (0xc001d083c0) Stream removed, broadcasting: 1
I0131 22:43:14.366783       8 log.go:172] (0xc004cf1290) (0xc001119b80) Stream removed, broadcasting: 5
I0131 22:43:14.366808       8 log.go:172] (0xc004cf1290) Go away received
I0131 22:43:14.367059       8 log.go:172] (0xc004cf1290) (0xc001d083c0) Stream removed, broadcasting: 1
I0131 22:43:14.367095       8 log.go:172] (0xc004cf1290) (0xc000940d20) Stream removed, broadcasting: 3
I0131 22:43:14.367138       8 log.go:172] (0xc004cf1290) (0xc001119b80) Stream removed, broadcasting: 5
Jan 31 22:43:14.367: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 31 22:43:14.367: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6841 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:43:14.367: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:43:14.404100       8 log.go:172] (0xc004cf18c0) (0xc001d08640) Create stream
I0131 22:43:14.404236       8 log.go:172] (0xc004cf18c0) (0xc001d08640) Stream added, broadcasting: 1
I0131 22:43:14.408303       8 log.go:172] (0xc004cf18c0) Reply frame received for 1
I0131 22:43:14.408384       8 log.go:172] (0xc004cf18c0) (0xc0011be140) Create stream
I0131 22:43:14.408402       8 log.go:172] (0xc004cf18c0) (0xc0011be140) Stream added, broadcasting: 3
I0131 22:43:14.409478       8 log.go:172] (0xc004cf18c0) Reply frame received for 3
I0131 22:43:14.409495       8 log.go:172] (0xc004cf18c0) (0xc001d08780) Create stream
I0131 22:43:14.409502       8 log.go:172] (0xc004cf18c0) (0xc001d08780) Stream added, broadcasting: 5
I0131 22:43:14.410270       8 log.go:172] (0xc004cf18c0) Reply frame received for 5
I0131 22:43:14.474318       8 log.go:172] (0xc004cf18c0) Data frame received for 3
I0131 22:43:14.474508       8 log.go:172] (0xc0011be140) (3) Data frame handling
I0131 22:43:14.474600       8 log.go:172] (0xc0011be140) (3) Data frame sent
I0131 22:43:14.582435       8 log.go:172] (0xc004cf18c0) (0xc0011be140) Stream removed, broadcasting: 3
I0131 22:43:14.583300       8 log.go:172] (0xc004cf18c0) Data frame received for 1
I0131 22:43:14.583354       8 log.go:172] (0xc004cf18c0) (0xc001d08780) Stream removed, broadcasting: 5
I0131 22:43:14.583383       8 log.go:172] (0xc001d08640) (1) Data frame handling
I0131 22:43:14.583410       8 log.go:172] (0xc001d08640) (1) Data frame sent
I0131 22:43:14.583433       8 log.go:172] (0xc004cf18c0) (0xc001d08640) Stream removed, broadcasting: 1
I0131 22:43:14.583455       8 log.go:172] (0xc004cf18c0) Go away received
I0131 22:43:14.584193       8 log.go:172] (0xc004cf18c0) (0xc001d08640) Stream removed, broadcasting: 1
I0131 22:43:14.584224       8 log.go:172] (0xc004cf18c0) (0xc0011be140) Stream removed, broadcasting: 3
I0131 22:43:14.584231       8 log.go:172] (0xc004cf18c0) (0xc001d08780) Stream removed, broadcasting: 5
Jan 31 22:43:14.584: INFO: Exec stderr: ""
Jan 31 22:43:14.584: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6841 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:43:14.584: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:43:14.646814       8 log.go:172] (0xc004cf1ef0) (0xc001d08960) Create stream
I0131 22:43:14.647148       8 log.go:172] (0xc004cf1ef0) (0xc001d08960) Stream added, broadcasting: 1
I0131 22:43:14.655790       8 log.go:172] (0xc004cf1ef0) Reply frame received for 1
I0131 22:43:14.656024       8 log.go:172] (0xc004cf1ef0) (0xc001393ea0) Create stream
I0131 22:43:14.656068       8 log.go:172] (0xc004cf1ef0) (0xc001393ea0) Stream added, broadcasting: 3
I0131 22:43:14.660765       8 log.go:172] (0xc004cf1ef0) Reply frame received for 3
I0131 22:43:14.660826       8 log.go:172] (0xc004cf1ef0) (0xc0011be8c0) Create stream
I0131 22:43:14.660837       8 log.go:172] (0xc004cf1ef0) (0xc0011be8c0) Stream added, broadcasting: 5
I0131 22:43:14.663132       8 log.go:172] (0xc004cf1ef0) Reply frame received for 5
I0131 22:43:14.747235       8 log.go:172] (0xc004cf1ef0) Data frame received for 3
I0131 22:43:14.747292       8 log.go:172] (0xc001393ea0) (3) Data frame handling
I0131 22:43:14.747304       8 log.go:172] (0xc001393ea0) (3) Data frame sent
I0131 22:43:14.801406       8 log.go:172] (0xc004cf1ef0) (0xc001393ea0) Stream removed, broadcasting: 3
I0131 22:43:14.801528       8 log.go:172] (0xc004cf1ef0) Data frame received for 1
I0131 22:43:14.801597       8 log.go:172] (0xc004cf1ef0) (0xc0011be8c0) Stream removed, broadcasting: 5
I0131 22:43:14.801641       8 log.go:172] (0xc001d08960) (1) Data frame handling
I0131 22:43:14.801654       8 log.go:172] (0xc001d08960) (1) Data frame sent
I0131 22:43:14.801663       8 log.go:172] (0xc004cf1ef0) (0xc001d08960) Stream removed, broadcasting: 1
I0131 22:43:14.801691       8 log.go:172] (0xc004cf1ef0) Go away received
I0131 22:43:14.801828       8 log.go:172] (0xc004cf1ef0) (0xc001d08960) Stream removed, broadcasting: 1
I0131 22:43:14.801839       8 log.go:172] (0xc004cf1ef0) (0xc001393ea0) Stream removed, broadcasting: 3
I0131 22:43:14.801857       8 log.go:172] (0xc004cf1ef0) (0xc0011be8c0) Stream removed, broadcasting: 5
Jan 31 22:43:14.801: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 31 22:43:14.801: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6841 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:43:14.801: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:43:14.831141       8 log.go:172] (0xc005b26580) (0xc001d08e60) Create stream
I0131 22:43:14.831208       8 log.go:172] (0xc005b26580) (0xc001d08e60) Stream added, broadcasting: 1
I0131 22:43:14.833580       8 log.go:172] (0xc005b26580) Reply frame received for 1
I0131 22:43:14.833605       8 log.go:172] (0xc005b26580) (0xc001119d60) Create stream
I0131 22:43:14.833611       8 log.go:172] (0xc005b26580) (0xc001119d60) Stream added, broadcasting: 3
I0131 22:43:14.835162       8 log.go:172] (0xc005b26580) Reply frame received for 3
I0131 22:43:14.835186       8 log.go:172] (0xc005b26580) (0xc0016d6640) Create stream
I0131 22:43:14.835200       8 log.go:172] (0xc005b26580) (0xc0016d6640) Stream added, broadcasting: 5
I0131 22:43:14.836108       8 log.go:172] (0xc005b26580) Reply frame received for 5
I0131 22:43:14.912001       8 log.go:172] (0xc005b26580) Data frame received for 3
I0131 22:43:14.912160       8 log.go:172] (0xc001119d60) (3) Data frame handling
I0131 22:43:14.912193       8 log.go:172] (0xc001119d60) (3) Data frame sent
I0131 22:43:14.980861       8 log.go:172] (0xc005b26580) (0xc001119d60) Stream removed, broadcasting: 3
I0131 22:43:14.981015       8 log.go:172] (0xc005b26580) Data frame received for 1
I0131 22:43:14.981027       8 log.go:172] (0xc001d08e60) (1) Data frame handling
I0131 22:43:14.981039       8 log.go:172] (0xc001d08e60) (1) Data frame sent
I0131 22:43:14.981046       8 log.go:172] (0xc005b26580) (0xc001d08e60) Stream removed, broadcasting: 1
I0131 22:43:14.981198       8 log.go:172] (0xc005b26580) (0xc0016d6640) Stream removed, broadcasting: 5
I0131 22:43:14.981233       8 log.go:172] (0xc005b26580) (0xc001d08e60) Stream removed, broadcasting: 1
I0131 22:43:14.981240       8 log.go:172] (0xc005b26580) (0xc001119d60) Stream removed, broadcasting: 3
I0131 22:43:14.981246       8 log.go:172] (0xc005b26580) (0xc0016d6640) Stream removed, broadcasting: 5
Jan 31 22:43:14.981: INFO: Exec stderr: ""
Jan 31 22:43:14.981: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6841 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:43:14.981: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:43:14.982926       8 log.go:172] (0xc005b26580) Go away received
I0131 22:43:15.019491       8 log.go:172] (0xc0056348f0) (0xc0013848c0) Create stream
I0131 22:43:15.019549       8 log.go:172] (0xc0056348f0) (0xc0013848c0) Stream added, broadcasting: 1
I0131 22:43:15.022670       8 log.go:172] (0xc0056348f0) Reply frame received for 1
I0131 22:43:15.022696       8 log.go:172] (0xc0056348f0) (0xc001d09040) Create stream
I0131 22:43:15.022703       8 log.go:172] (0xc0056348f0) (0xc001d09040) Stream added, broadcasting: 3
I0131 22:43:15.023407       8 log.go:172] (0xc0056348f0) Reply frame received for 3
I0131 22:43:15.023432       8 log.go:172] (0xc0056348f0) (0xc000941040) Create stream
I0131 22:43:15.023444       8 log.go:172] (0xc0056348f0) (0xc000941040) Stream added, broadcasting: 5
I0131 22:43:15.024284       8 log.go:172] (0xc0056348f0) Reply frame received for 5
I0131 22:43:15.081329       8 log.go:172] (0xc0056348f0) Data frame received for 3
I0131 22:43:15.081380       8 log.go:172] (0xc001d09040) (3) Data frame handling
I0131 22:43:15.081392       8 log.go:172] (0xc001d09040) (3) Data frame sent
I0131 22:43:15.139241       8 log.go:172] (0xc0056348f0) Data frame received for 1
I0131 22:43:15.139422       8 log.go:172] (0xc0013848c0) (1) Data frame handling
I0131 22:43:15.139461       8 log.go:172] (0xc0013848c0) (1) Data frame sent
I0131 22:43:15.139954       8 log.go:172] (0xc0056348f0) (0xc0013848c0) Stream removed, broadcasting: 1
I0131 22:43:15.140673       8 log.go:172] (0xc0056348f0) (0xc001d09040) Stream removed, broadcasting: 3
I0131 22:43:15.141191       8 log.go:172] (0xc0056348f0) (0xc000941040) Stream removed, broadcasting: 5
I0131 22:43:15.141229       8 log.go:172] (0xc0056348f0) (0xc0013848c0) Stream removed, broadcasting: 1
I0131 22:43:15.141253       8 log.go:172] (0xc0056348f0) (0xc001d09040) Stream removed, broadcasting: 3
I0131 22:43:15.141265       8 log.go:172] (0xc0056348f0) (0xc000941040) Stream removed, broadcasting: 5
Jan 31 22:43:15.141: INFO: Exec stderr: ""
Jan 31 22:43:15.141: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6841 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:43:15.141: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:43:15.183115       8 log.go:172] (0xc005dfa2c0) (0xc0011bfea0) Create stream
I0131 22:43:15.183244       8 log.go:172] (0xc005dfa2c0) (0xc0011bfea0) Stream added, broadcasting: 1
I0131 22:43:15.188488       8 log.go:172] (0xc005dfa2c0) Reply frame received for 1
I0131 22:43:15.188580       8 log.go:172] (0xc005dfa2c0) (0xc0009414a0) Create stream
I0131 22:43:15.188595       8 log.go:172] (0xc005dfa2c0) (0xc0009414a0) Stream added, broadcasting: 3
I0131 22:43:15.190665       8 log.go:172] (0xc005dfa2c0) Reply frame received for 3
I0131 22:43:15.190712       8 log.go:172] (0xc005dfa2c0) (0xc0016d6780) Create stream
I0131 22:43:15.190722       8 log.go:172] (0xc005dfa2c0) (0xc0016d6780) Stream added, broadcasting: 5
I0131 22:43:15.192424       8 log.go:172] (0xc005dfa2c0) Reply frame received for 5
I0131 22:43:15.268101       8 log.go:172] (0xc005dfa2c0) Data frame received for 3
I0131 22:43:15.268202       8 log.go:172] (0xc0009414a0) (3) Data frame handling
I0131 22:43:15.268230       8 log.go:172] (0xc0009414a0) (3) Data frame sent
I0131 22:43:15.343128       8 log.go:172] (0xc005dfa2c0) Data frame received for 1
I0131 22:43:15.343199       8 log.go:172] (0xc005dfa2c0) (0xc0009414a0) Stream removed, broadcasting: 3
I0131 22:43:15.343238       8 log.go:172] (0xc0011bfea0) (1) Data frame handling
I0131 22:43:15.343255       8 log.go:172] (0xc0011bfea0) (1) Data frame sent
I0131 22:43:15.343276       8 log.go:172] (0xc005dfa2c0) (0xc0016d6780) Stream removed, broadcasting: 5
I0131 22:43:15.343307       8 log.go:172] (0xc005dfa2c0) (0xc0011bfea0) Stream removed, broadcasting: 1
I0131 22:43:15.343329       8 log.go:172] (0xc005dfa2c0) Go away received
I0131 22:43:15.343504       8 log.go:172] (0xc005dfa2c0) (0xc0011bfea0) Stream removed, broadcasting: 1
I0131 22:43:15.343516       8 log.go:172] (0xc005dfa2c0) (0xc0009414a0) Stream removed, broadcasting: 3
I0131 22:43:15.343530       8 log.go:172] (0xc005dfa2c0) (0xc0016d6780) Stream removed, broadcasting: 5
Jan 31 22:43:15.343: INFO: Exec stderr: ""
Jan 31 22:43:15.343: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6841 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:43:15.343: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:43:15.380527       8 log.go:172] (0xc005dfa8f0) (0xc00086eaa0) Create stream
I0131 22:43:15.380790       8 log.go:172] (0xc005dfa8f0) (0xc00086eaa0) Stream added, broadcasting: 1
I0131 22:43:15.386854       8 log.go:172] (0xc005dfa8f0) Reply frame received for 1
I0131 22:43:15.386904       8 log.go:172] (0xc005dfa8f0) (0xc000941680) Create stream
I0131 22:43:15.386922       8 log.go:172] (0xc005dfa8f0) (0xc000941680) Stream added, broadcasting: 3
I0131 22:43:15.391194       8 log.go:172] (0xc005dfa8f0) Reply frame received for 3
I0131 22:43:15.391285       8 log.go:172] (0xc005dfa8f0) (0xc0016d68c0) Create stream
I0131 22:43:15.391315       8 log.go:172] (0xc005dfa8f0) (0xc0016d68c0) Stream added, broadcasting: 5
I0131 22:43:15.392482       8 log.go:172] (0xc005dfa8f0) Reply frame received for 5
I0131 22:43:15.458280       8 log.go:172] (0xc005dfa8f0) Data frame received for 3
I0131 22:43:15.458333       8 log.go:172] (0xc000941680) (3) Data frame handling
I0131 22:43:15.458346       8 log.go:172] (0xc000941680) (3) Data frame sent
I0131 22:43:15.516467       8 log.go:172] (0xc005dfa8f0) (0xc000941680) Stream removed, broadcasting: 3
I0131 22:43:15.516714       8 log.go:172] (0xc005dfa8f0) Data frame received for 1
I0131 22:43:15.516731       8 log.go:172] (0xc00086eaa0) (1) Data frame handling
I0131 22:43:15.516745       8 log.go:172] (0xc00086eaa0) (1) Data frame sent
I0131 22:43:15.516755       8 log.go:172] (0xc005dfa8f0) (0xc00086eaa0) Stream removed, broadcasting: 1
I0131 22:43:15.516795       8 log.go:172] (0xc005dfa8f0) (0xc0016d68c0) Stream removed, broadcasting: 5
I0131 22:43:15.516842       8 log.go:172] (0xc005dfa8f0) Go away received
I0131 22:43:15.517115       8 log.go:172] (0xc005dfa8f0) (0xc00086eaa0) Stream removed, broadcasting: 1
I0131 22:43:15.517132       8 log.go:172] (0xc005dfa8f0) (0xc000941680) Stream removed, broadcasting: 3
I0131 22:43:15.517139       8 log.go:172] (0xc005dfa8f0) (0xc0016d68c0) Stream removed, broadcasting: 5
Jan 31 22:43:15.517: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:43:15.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6841" for this suite.

• [SLOW TEST:24.528 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3917,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:43:15.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:43:47.650: INFO: Container started at 2020-01-31 22:43:23 +0000 UTC, pod became ready at 2020-01-31 22:43:46 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:43:47.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2455" for this suite.

• [SLOW TEST:32.138 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3941,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:43:47.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 31 22:43:47.771: INFO: Waiting up to 5m0s for pod "downward-api-97f028c9-e548-417e-bbfe-0b1bab02e911" in namespace "downward-api-317" to be "success or failure"
Jan 31 22:43:47.778: INFO: Pod "downward-api-97f028c9-e548-417e-bbfe-0b1bab02e911": Phase="Pending", Reason="", readiness=false. Elapsed: 6.550196ms
Jan 31 22:43:49.792: INFO: Pod "downward-api-97f028c9-e548-417e-bbfe-0b1bab02e911": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020571312s
Jan 31 22:43:51.801: INFO: Pod "downward-api-97f028c9-e548-417e-bbfe-0b1bab02e911": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029691408s
Jan 31 22:43:54.113: INFO: Pod "downward-api-97f028c9-e548-417e-bbfe-0b1bab02e911": Phase="Pending", Reason="", readiness=false. Elapsed: 6.342086267s
Jan 31 22:43:56.688: INFO: Pod "downward-api-97f028c9-e548-417e-bbfe-0b1bab02e911": Phase="Pending", Reason="", readiness=false. Elapsed: 8.916733315s
Jan 31 22:43:58.696: INFO: Pod "downward-api-97f028c9-e548-417e-bbfe-0b1bab02e911": Phase="Pending", Reason="", readiness=false. Elapsed: 10.92439205s
Jan 31 22:44:00.702: INFO: Pod "downward-api-97f028c9-e548-417e-bbfe-0b1bab02e911": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.931238586s
STEP: Saw pod success
Jan 31 22:44:00.703: INFO: Pod "downward-api-97f028c9-e548-417e-bbfe-0b1bab02e911" satisfied condition "success or failure"
Jan 31 22:44:00.877: INFO: Trying to get logs from node jerma-node pod downward-api-97f028c9-e548-417e-bbfe-0b1bab02e911 container dapi-container: 
STEP: delete the pod
Jan 31 22:44:01.057: INFO: Waiting for pod downward-api-97f028c9-e548-417e-bbfe-0b1bab02e911 to disappear
Jan 31 22:44:01.082: INFO: Pod downward-api-97f028c9-e548-417e-bbfe-0b1bab02e911 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:44:01.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-317" for this suite.

• [SLOW TEST:13.439 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3961,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:44:01.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:44:09.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6076" for this suite.

• [SLOW TEST:8.375 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3971,"failed":0}
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:44:09.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 31 22:44:09.597: INFO: Waiting up to 5m0s for pod "downward-api-842f0501-f2aa-432a-b9ea-a3c05e6a2f8f" in namespace "downward-api-1698" to be "success or failure"
Jan 31 22:44:09.608: INFO: Pod "downward-api-842f0501-f2aa-432a-b9ea-a3c05e6a2f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.980372ms
Jan 31 22:44:11.616: INFO: Pod "downward-api-842f0501-f2aa-432a-b9ea-a3c05e6a2f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019459281s
Jan 31 22:44:13.623: INFO: Pod "downward-api-842f0501-f2aa-432a-b9ea-a3c05e6a2f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02667693s
Jan 31 22:44:15.651: INFO: Pod "downward-api-842f0501-f2aa-432a-b9ea-a3c05e6a2f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054486024s
Jan 31 22:44:17.659: INFO: Pod "downward-api-842f0501-f2aa-432a-b9ea-a3c05e6a2f8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062399485s
STEP: Saw pod success
Jan 31 22:44:17.659: INFO: Pod "downward-api-842f0501-f2aa-432a-b9ea-a3c05e6a2f8f" satisfied condition "success or failure"
Jan 31 22:44:17.694: INFO: Trying to get logs from node jerma-node pod downward-api-842f0501-f2aa-432a-b9ea-a3c05e6a2f8f container dapi-container: 
STEP: delete the pod
Jan 31 22:44:18.056: INFO: Waiting for pod downward-api-842f0501-f2aa-432a-b9ea-a3c05e6a2f8f to disappear
Jan 31 22:44:18.060: INFO: Pod downward-api-842f0501-f2aa-432a-b9ea-a3c05e6a2f8f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:44:18.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1698" for this suite.

• [SLOW TEST:8.592 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3977,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:44:18.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Jan 31 22:44:18.226: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jan 31 22:44:18.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6790'
Jan 31 22:44:18.792: INFO: stderr: ""
Jan 31 22:44:18.792: INFO: stdout: "service/agnhost-slave created\n"
Jan 31 22:44:18.792: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jan 31 22:44:18.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6790'
Jan 31 22:44:19.206: INFO: stderr: ""
Jan 31 22:44:19.206: INFO: stdout: "service/agnhost-master created\n"
Jan 31 22:44:19.207: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 31 22:44:19.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6790'
Jan 31 22:44:19.676: INFO: stderr: ""
Jan 31 22:44:19.676: INFO: stdout: "service/frontend created\n"
Jan 31 22:44:19.677: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jan 31 22:44:19.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6790'
Jan 31 22:44:20.029: INFO: stderr: ""
Jan 31 22:44:20.029: INFO: stdout: "deployment.apps/frontend created\n"
Jan 31 22:44:20.030: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 31 22:44:20.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6790'
Jan 31 22:44:20.541: INFO: stderr: ""
Jan 31 22:44:20.542: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jan 31 22:44:20.542: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 31 22:44:20.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6790'
Jan 31 22:44:22.244: INFO: stderr: ""
Jan 31 22:44:22.244: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jan 31 22:44:22.244: INFO: Waiting for all frontend pods to be Running.
Jan 31 22:44:42.297: INFO: Waiting for frontend to serve content.
Jan 31 22:44:42.333: INFO: Trying to add a new entry to the guestbook.
Jan 31 22:44:42.358: INFO: Verifying that added entry can be retrieved.
Jan 31 22:44:42.367: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
Jan 31 22:44:47.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6790'
Jan 31 22:44:47.667: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 22:44:47.667: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 22:44:47.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6790'
Jan 31 22:44:47.893: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 22:44:47.893: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 22:44:47.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6790'
Jan 31 22:44:48.165: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 22:44:48.165: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 22:44:48.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6790'
Jan 31 22:44:48.283: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 22:44:48.283: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 22:44:48.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6790'
Jan 31 22:44:48.439: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 22:44:48.439: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 22:44:48.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6790'
Jan 31 22:44:48.576: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 22:44:48.576: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:44:48.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6790" for this suite.

• [SLOW TEST:30.626 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":240,"skipped":3987,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:44:48.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:44:48.936: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 11.686223ms)
Jan 31 22:44:48.942: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 6.150164ms)
Jan 31 22:44:48.949: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 6.192328ms)
Jan 31 22:44:49.001: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 51.99329ms)
Jan 31 22:44:50.476: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 1.474486759s)
Jan 31 22:44:50.700: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 223.433181ms)
Jan 31 22:44:50.734: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 33.979081ms)
Jan 31 22:44:50.837: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 102.268737ms)
Jan 31 22:44:50.881: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 43.609706ms)
Jan 31 22:44:50.886: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.170716ms)
Jan 31 22:44:50.950: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 63.918654ms)
Jan 31 22:44:50.980: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 30.018487ms)
Jan 31 22:44:50.987: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 7.259196ms)
Jan 31 22:44:50.996: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 8.264357ms)
Jan 31 22:44:51.001: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.782219ms)
Jan 31 22:44:51.005: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.765122ms)
Jan 31 22:44:51.010: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.98488ms)
Jan 31 22:44:51.015: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.955853ms)
Jan 31 22:44:51.024: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 8.81299ms)
Jan 31 22:44:51.046: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 21.776217ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:44:51.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6156" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":241,"skipped":3997,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:44:51.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Jan 31 22:44:51.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9223 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 31 22:45:04.713: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0131 22:45:03.495198    3552 log.go:172] (0xc000ae86e0) (0xc0006d1ae0) Create stream\nI0131 22:45:03.495403    3552 log.go:172] (0xc000ae86e0) (0xc0006d1ae0) Stream added, broadcasting: 1\nI0131 22:45:03.498934    3552 log.go:172] (0xc000ae86e0) Reply frame received for 1\nI0131 22:45:03.499004    3552 log.go:172] (0xc000ae86e0) (0xc00059a000) Create stream\nI0131 22:45:03.499027    3552 log.go:172] (0xc000ae86e0) (0xc00059a000) Stream added, broadcasting: 3\nI0131 22:45:03.500392    3552 log.go:172] (0xc000ae86e0) Reply frame received for 3\nI0131 22:45:03.500437    3552 log.go:172] (0xc000ae86e0) (0xc0006d1b80) Create stream\nI0131 22:45:03.500456    3552 log.go:172] (0xc000ae86e0) (0xc0006d1b80) Stream added, broadcasting: 5\nI0131 22:45:03.501708    3552 log.go:172] (0xc000ae86e0) Reply frame received for 5\nI0131 22:45:03.501742    3552 log.go:172] (0xc000ae86e0) (0xc000afc140) Create stream\nI0131 22:45:03.501754    3552 log.go:172] (0xc000ae86e0) (0xc000afc140) Stream added, broadcasting: 7\nI0131 22:45:03.503349    3552 log.go:172] (0xc000ae86e0) Reply frame received for 7\nI0131 22:45:03.503697    3552 log.go:172] (0xc00059a000) (3) Writing data frame\nI0131 22:45:03.503853    3552 log.go:172] (0xc00059a000) (3) Writing data frame\nI0131 22:45:03.508418    3552 log.go:172] (0xc000ae86e0) Data frame received for 5\nI0131 22:45:03.508517    3552 log.go:172] (0xc0006d1b80) (5) Data frame handling\nI0131 22:45:03.508542    3552 log.go:172] (0xc0006d1b80) (5) Data frame sent\nI0131 22:45:03.510275    3552 log.go:172] (0xc000ae86e0) Data frame received for 5\nI0131 22:45:03.510291    3552 log.go:172] (0xc0006d1b80) (5) Data frame handling\nI0131 22:45:03.510307    3552 log.go:172] (0xc0006d1b80) (5) Data frame sent\nI0131 22:45:04.567824    3552 log.go:172] (0xc000ae86e0) Data frame received for 1\nI0131 22:45:04.568023    3552 log.go:172] (0xc000ae86e0) (0xc00059a000) Stream removed, broadcasting: 3\nI0131 22:45:04.568152    3552 log.go:172] (0xc0006d1ae0) (1) Data frame handling\nI0131 22:45:04.568203    3552 log.go:172] (0xc0006d1ae0) (1) Data frame sent\nI0131 22:45:04.568366    3552 log.go:172] (0xc000ae86e0) (0xc000afc140) Stream removed, broadcasting: 7\nI0131 22:45:04.568537    3552 log.go:172] (0xc000ae86e0) (0xc0006d1b80) Stream removed, broadcasting: 5\nI0131 22:45:04.568681    3552 log.go:172] (0xc000ae86e0) (0xc0006d1ae0) Stream removed, broadcasting: 1\nI0131 22:45:04.568754    3552 log.go:172] (0xc000ae86e0) Go away received\nI0131 22:45:04.570119    3552 log.go:172] (0xc000ae86e0) (0xc0006d1ae0) Stream removed, broadcasting: 1\nI0131 22:45:04.570162    3552 log.go:172] (0xc000ae86e0) (0xc00059a000) Stream removed, broadcasting: 3\nI0131 22:45:04.570183    3552 log.go:172] (0xc000ae86e0) (0xc0006d1b80) Stream removed, broadcasting: 5\nI0131 22:45:04.570198    3552 log.go:172] (0xc000ae86e0) (0xc000afc140) Stream removed, broadcasting: 7\n"
Jan 31 22:45:04.714: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:45:06.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9223" for this suite.

• [SLOW TEST:15.715 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":278,"completed":242,"skipped":4000,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:45:06.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:45:06.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:45:15.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9794" for this suite.

• [SLOW TEST:8.241 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4030,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:45:15.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Jan 31 22:45:15.166: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix281048318/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:45:15.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2152" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":244,"skipped":4047,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:45:15.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-3c66c0a7-f883-4578-8ac9-c8cda9907d63
STEP: Creating a pod to test consume secrets
Jan 31 22:45:15.415: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6a6ce7c5-629b-41e4-b175-cf74f7c0606f" in namespace "projected-897" to be "success or failure"
Jan 31 22:45:15.424: INFO: Pod "pod-projected-secrets-6a6ce7c5-629b-41e4-b175-cf74f7c0606f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.947575ms
Jan 31 22:45:17.429: INFO: Pod "pod-projected-secrets-6a6ce7c5-629b-41e4-b175-cf74f7c0606f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014581318s
Jan 31 22:45:19.436: INFO: Pod "pod-projected-secrets-6a6ce7c5-629b-41e4-b175-cf74f7c0606f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02164662s
Jan 31 22:45:21.443: INFO: Pod "pod-projected-secrets-6a6ce7c5-629b-41e4-b175-cf74f7c0606f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028347277s
Jan 31 22:45:23.449: INFO: Pod "pod-projected-secrets-6a6ce7c5-629b-41e4-b175-cf74f7c0606f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034604499s
STEP: Saw pod success
Jan 31 22:45:23.449: INFO: Pod "pod-projected-secrets-6a6ce7c5-629b-41e4-b175-cf74f7c0606f" satisfied condition "success or failure"
Jan 31 22:45:23.457: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-6a6ce7c5-629b-41e4-b175-cf74f7c0606f container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 22:45:23.499: INFO: Waiting for pod pod-projected-secrets-6a6ce7c5-629b-41e4-b175-cf74f7c0606f to disappear
Jan 31 22:45:23.505: INFO: Pod pod-projected-secrets-6a6ce7c5-629b-41e4-b175-cf74f7c0606f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:45:23.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-897" for this suite.

• [SLOW TEST:8.286 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4058,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:45:23.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-180e1f5f-6174-4bfa-90b5-37eda49fac9d
STEP: Creating a pod to test consume configMaps
Jan 31 22:45:23.704: INFO: Waiting up to 5m0s for pod "pod-configmaps-6022b870-9073-4ee0-af75-d38da897629c" in namespace "configmap-8839" to be "success or failure"
Jan 31 22:45:23.716: INFO: Pod "pod-configmaps-6022b870-9073-4ee0-af75-d38da897629c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.868069ms
Jan 31 22:45:26.622: INFO: Pod "pod-configmaps-6022b870-9073-4ee0-af75-d38da897629c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.917962152s
Jan 31 22:45:28.634: INFO: Pod "pod-configmaps-6022b870-9073-4ee0-af75-d38da897629c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.930283516s
Jan 31 22:45:30.647: INFO: Pod "pod-configmaps-6022b870-9073-4ee0-af75-d38da897629c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.943259237s
Jan 31 22:45:32.653: INFO: Pod "pod-configmaps-6022b870-9073-4ee0-af75-d38da897629c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.949793964s
STEP: Saw pod success
Jan 31 22:45:32.654: INFO: Pod "pod-configmaps-6022b870-9073-4ee0-af75-d38da897629c" satisfied condition "success or failure"
Jan 31 22:45:32.658: INFO: Trying to get logs from node jerma-node pod pod-configmaps-6022b870-9073-4ee0-af75-d38da897629c container configmap-volume-test: 
STEP: delete the pod
Jan 31 22:45:32.685: INFO: Waiting for pod pod-configmaps-6022b870-9073-4ee0-af75-d38da897629c to disappear
Jan 31 22:45:32.694: INFO: Pod pod-configmaps-6022b870-9073-4ee0-af75-d38da897629c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:45:32.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8839" for this suite.

• [SLOW TEST:9.140 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4068,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:45:32.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:45:32.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-5063" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":247,"skipped":4094,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:45:32.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 22:45:33.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4325'
Jan 31 22:45:33.163: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 22:45:33.163: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
Jan 31 22:45:35.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4325'
Jan 31 22:45:35.405: INFO: stderr: ""
Jan 31 22:45:35.405: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:45:35.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4325" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":248,"skipped":4098,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:45:35.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 22:45:35.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8593'
Jan 31 22:45:35.643: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 22:45:35.643: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773
Jan 31 22:45:35.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-8593'
Jan 31 22:45:35.880: INFO: stderr: ""
Jan 31 22:45:35.881: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:45:35.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8593" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":278,"completed":249,"skipped":4108,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:45:36.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:45:52.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6538" for this suite.

• [SLOW TEST:16.362 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":250,"skipped":4114,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:45:52.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-4695
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 22:45:52.660: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 31 22:46:29.019: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-4695 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:46:29.019: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:46:29.080528       8 log.go:172] (0xc002556000) (0xc001980500) Create stream
I0131 22:46:29.080799       8 log.go:172] (0xc002556000) (0xc001980500) Stream added, broadcasting: 1
I0131 22:46:29.085178       8 log.go:172] (0xc002556000) Reply frame received for 1
I0131 22:46:29.085230       8 log.go:172] (0xc002556000) (0xc00190f360) Create stream
I0131 22:46:29.085249       8 log.go:172] (0xc002556000) (0xc00190f360) Stream added, broadcasting: 3
I0131 22:46:29.090000       8 log.go:172] (0xc002556000) Reply frame received for 3
I0131 22:46:29.090035       8 log.go:172] (0xc002556000) (0xc000dbd360) Create stream
I0131 22:46:29.090052       8 log.go:172] (0xc002556000) (0xc000dbd360) Stream added, broadcasting: 5
I0131 22:46:29.091708       8 log.go:172] (0xc002556000) Reply frame received for 5
I0131 22:46:29.186454       8 log.go:172] (0xc002556000) Data frame received for 3
I0131 22:46:29.186707       8 log.go:172] (0xc00190f360) (3) Data frame handling
I0131 22:46:29.186772       8 log.go:172] (0xc00190f360) (3) Data frame sent
I0131 22:46:29.265949       8 log.go:172] (0xc002556000) (0xc000dbd360) Stream removed, broadcasting: 5
I0131 22:46:29.266054       8 log.go:172] (0xc002556000) Data frame received for 1
I0131 22:46:29.266083       8 log.go:172] (0xc002556000) (0xc00190f360) Stream removed, broadcasting: 3
I0131 22:46:29.266130       8 log.go:172] (0xc001980500) (1) Data frame handling
I0131 22:46:29.266156       8 log.go:172] (0xc001980500) (1) Data frame sent
I0131 22:46:29.266174       8 log.go:172] (0xc002556000) (0xc001980500) Stream removed, broadcasting: 1
I0131 22:46:29.266194       8 log.go:172] (0xc002556000) Go away received
I0131 22:46:29.266713       8 log.go:172] (0xc002556000) (0xc001980500) Stream removed, broadcasting: 1
I0131 22:46:29.266761       8 log.go:172] (0xc002556000) (0xc00190f360) Stream removed, broadcasting: 3
I0131 22:46:29.266784       8 log.go:172] (0xc002556000) (0xc000dbd360) Stream removed, broadcasting: 5
Jan 31 22:46:29.266: INFO: Waiting for responses: map[]
Jan 31 22:46:29.272: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-4695 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 22:46:29.272: INFO: >>> kubeConfig: /root/.kube/config
I0131 22:46:29.317448       8 log.go:172] (0xc0029fa580) (0xc001392000) Create stream
I0131 22:46:29.317596       8 log.go:172] (0xc0029fa580) (0xc001392000) Stream added, broadcasting: 1
I0131 22:46:29.320768       8 log.go:172] (0xc0029fa580) Reply frame received for 1
I0131 22:46:29.320806       8 log.go:172] (0xc0029fa580) (0xc0010d80a0) Create stream
I0131 22:46:29.320814       8 log.go:172] (0xc0029fa580) (0xc0010d80a0) Stream added, broadcasting: 3
I0131 22:46:29.321979       8 log.go:172] (0xc0029fa580) Reply frame received for 3
I0131 22:46:29.321999       8 log.go:172] (0xc0029fa580) (0xc001718500) Create stream
I0131 22:46:29.322006       8 log.go:172] (0xc0029fa580) (0xc001718500) Stream added, broadcasting: 5
I0131 22:46:29.323403       8 log.go:172] (0xc0029fa580) Reply frame received for 5
I0131 22:46:29.411633       8 log.go:172] (0xc0029fa580) Data frame received for 3
I0131 22:46:29.411713       8 log.go:172] (0xc0010d80a0) (3) Data frame handling
I0131 22:46:29.411743       8 log.go:172] (0xc0010d80a0) (3) Data frame sent
I0131 22:46:29.494869       8 log.go:172] (0xc0029fa580) Data frame received for 1
I0131 22:46:29.495024       8 log.go:172] (0xc0029fa580) (0xc001718500) Stream removed, broadcasting: 5
I0131 22:46:29.495113       8 log.go:172] (0xc001392000) (1) Data frame handling
I0131 22:46:29.495193       8 log.go:172] (0xc001392000) (1) Data frame sent
I0131 22:46:29.495242       8 log.go:172] (0xc0029fa580) (0xc0010d80a0) Stream removed, broadcasting: 3
I0131 22:46:29.495269       8 log.go:172] (0xc0029fa580) (0xc001392000) Stream removed, broadcasting: 1
I0131 22:46:29.495278       8 log.go:172] (0xc0029fa580) Go away received
I0131 22:46:29.495711       8 log.go:172] (0xc0029fa580) (0xc001392000) Stream removed, broadcasting: 1
I0131 22:46:29.495759       8 log.go:172] (0xc0029fa580) (0xc0010d80a0) Stream removed, broadcasting: 3
I0131 22:46:29.495791       8 log.go:172] (0xc0029fa580) (0xc001718500) Stream removed, broadcasting: 5
Jan 31 22:46:29.495: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:46:29.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4695" for this suite.

• [SLOW TEST:37.024 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4118,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:46:29.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Jan 31 22:46:29.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7884'
Jan 31 22:46:29.936: INFO: stderr: ""
Jan 31 22:46:29.936: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 22:46:29.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7884'
Jan 31 22:46:30.124: INFO: stderr: ""
Jan 31 22:46:30.125: INFO: stdout: "update-demo-nautilus-klswx update-demo-nautilus-l7526 "
Jan 31 22:46:30.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-klswx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7884'
Jan 31 22:46:30.253: INFO: stderr: ""
Jan 31 22:46:30.253: INFO: stdout: ""
Jan 31 22:46:30.253: INFO: update-demo-nautilus-klswx is created but not running
Jan 31 22:46:35.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7884'
Jan 31 22:46:36.346: INFO: stderr: ""
Jan 31 22:46:36.347: INFO: stdout: "update-demo-nautilus-klswx update-demo-nautilus-l7526 "
Jan 31 22:46:36.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-klswx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7884'
Jan 31 22:46:36.637: INFO: stderr: ""
Jan 31 22:46:36.637: INFO: stdout: ""
Jan 31 22:46:36.637: INFO: update-demo-nautilus-klswx is created but not running
Jan 31 22:46:41.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7884'
Jan 31 22:46:41.765: INFO: stderr: ""
Jan 31 22:46:41.765: INFO: stdout: "update-demo-nautilus-klswx update-demo-nautilus-l7526 "
Jan 31 22:46:41.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-klswx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7884'
Jan 31 22:46:41.871: INFO: stderr: ""
Jan 31 22:46:41.871: INFO: stdout: "true"
Jan 31 22:46:41.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-klswx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7884'
Jan 31 22:46:41.985: INFO: stderr: ""
Jan 31 22:46:41.985: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 22:46:41.985: INFO: validating pod update-demo-nautilus-klswx
Jan 31 22:46:41.991: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 22:46:41.991: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 22:46:41.991: INFO: update-demo-nautilus-klswx is verified up and running
Jan 31 22:46:41.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l7526 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7884'
Jan 31 22:46:42.081: INFO: stderr: ""
Jan 31 22:46:42.082: INFO: stdout: "true"
Jan 31 22:46:42.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l7526 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7884'
Jan 31 22:46:42.222: INFO: stderr: ""
Jan 31 22:46:42.223: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 22:46:42.223: INFO: validating pod update-demo-nautilus-l7526
Jan 31 22:46:42.421: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 22:46:42.422: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 22:46:42.422: INFO: update-demo-nautilus-l7526 is verified up and running
STEP: rolling-update to new replication controller
Jan 31 22:46:42.427: INFO: scanned /root for discovery docs: 
Jan 31 22:46:42.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7884'
Jan 31 22:47:14.046: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 31 22:47:14.046: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 22:47:14.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7884'
Jan 31 22:47:14.170: INFO: stderr: ""
Jan 31 22:47:14.170: INFO: stdout: "update-demo-kitten-qqkf8 update-demo-kitten-rxhnq "
Jan 31 22:47:14.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qqkf8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7884'
Jan 31 22:47:14.277: INFO: stderr: ""
Jan 31 22:47:14.277: INFO: stdout: "true"
Jan 31 22:47:14.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qqkf8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7884'
Jan 31 22:47:14.398: INFO: stderr: ""
Jan 31 22:47:14.398: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 31 22:47:14.398: INFO: validating pod update-demo-kitten-qqkf8
Jan 31 22:47:14.406: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 31 22:47:14.406: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 31 22:47:14.406: INFO: update-demo-kitten-qqkf8 is verified up and running
Jan 31 22:47:14.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rxhnq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7884'
Jan 31 22:47:14.561: INFO: stderr: ""
Jan 31 22:47:14.561: INFO: stdout: "true"
Jan 31 22:47:14.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rxhnq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7884'
Jan 31 22:47:14.661: INFO: stderr: ""
Jan 31 22:47:14.662: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 31 22:47:14.662: INFO: validating pod update-demo-kitten-rxhnq
Jan 31 22:47:14.682: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 31 22:47:14.682: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 31 22:47:14.682: INFO: update-demo-kitten-rxhnq is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:47:14.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7884" for this suite.

• [SLOW TEST:45.177 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":252,"skipped":4121,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:47:14.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-sdjn
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 22:47:14.886: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-sdjn" in namespace "subpath-8214" to be "success or failure"
Jan 31 22:47:14.904: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Pending", Reason="", readiness=false. Elapsed: 17.993505ms
Jan 31 22:47:16.913: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026903712s
Jan 31 22:47:18.920: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034408116s
Jan 31 22:47:22.118: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Pending", Reason="", readiness=false. Elapsed: 7.232188783s
Jan 31 22:47:24.829: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Pending", Reason="", readiness=false. Elapsed: 9.943338516s
Jan 31 22:47:26.836: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Running", Reason="", readiness=true. Elapsed: 11.950364706s
Jan 31 22:47:28.844: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Running", Reason="", readiness=true. Elapsed: 13.957807806s
Jan 31 22:47:30.851: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Running", Reason="", readiness=true. Elapsed: 15.965381377s
Jan 31 22:47:32.871: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Running", Reason="", readiness=true. Elapsed: 17.98472946s
Jan 31 22:47:34.879: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Running", Reason="", readiness=true. Elapsed: 19.993422743s
Jan 31 22:47:36.892: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Running", Reason="", readiness=true. Elapsed: 22.006425254s
Jan 31 22:47:38.900: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Running", Reason="", readiness=true. Elapsed: 24.013915239s
Jan 31 22:47:40.906: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Running", Reason="", readiness=true. Elapsed: 26.020384037s
Jan 31 22:47:42.914: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Running", Reason="", readiness=true. Elapsed: 28.027965238s
Jan 31 22:47:44.924: INFO: Pod "pod-subpath-test-projected-sdjn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.038519526s
STEP: Saw pod success
Jan 31 22:47:44.925: INFO: Pod "pod-subpath-test-projected-sdjn" satisfied condition "success or failure"
Jan 31 22:47:44.929: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-sdjn container test-container-subpath-projected-sdjn: 
STEP: delete the pod
Jan 31 22:47:45.237: INFO: Waiting for pod pod-subpath-test-projected-sdjn to disappear
Jan 31 22:47:45.264: INFO: Pod pod-subpath-test-projected-sdjn no longer exists
STEP: Deleting pod pod-subpath-test-projected-sdjn
Jan 31 22:47:45.264: INFO: Deleting pod "pod-subpath-test-projected-sdjn" in namespace "subpath-8214"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:47:45.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8214" for this suite.

• [SLOW TEST:30.592 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":253,"skipped":4191,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:47:45.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jan 31 22:47:56.120: INFO: Successfully updated pod "adopt-release-qj55n"
STEP: Checking that the Job readopts the Pod
Jan 31 22:47:56.120: INFO: Waiting up to 15m0s for pod "adopt-release-qj55n" in namespace "job-2130" to be "adopted"
Jan 31 22:47:56.141: INFO: Pod "adopt-release-qj55n": Phase="Running", Reason="", readiness=true. Elapsed: 20.342919ms
Jan 31 22:47:56.141: INFO: Pod "adopt-release-qj55n" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jan 31 22:47:56.666: INFO: Successfully updated pod "adopt-release-qj55n"
STEP: Checking that the Job releases the Pod
Jan 31 22:47:56.666: INFO: Waiting up to 15m0s for pod "adopt-release-qj55n" in namespace "job-2130" to be "released"
Jan 31 22:47:56.687: INFO: Pod "adopt-release-qj55n": Phase="Running", Reason="", readiness=true. Elapsed: 20.217276ms
Jan 31 22:47:58.696: INFO: Pod "adopt-release-qj55n": Phase="Running", Reason="", readiness=true. Elapsed: 2.030073403s
Jan 31 22:47:58.697: INFO: Pod "adopt-release-qj55n" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:47:58.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2130" for this suite.

• [SLOW TEST:13.418 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":254,"skipped":4225,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:47:58.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 22:47:59.424: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 22:48:01.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:48:03.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:48:05.447: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:48:07.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716107679, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 22:48:10.494: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:48:10.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5715" for this suite.
STEP: Destroying namespace "webhook-5715-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.976 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":255,"skipped":4246,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:48:10.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:48:10.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6078" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":256,"skipped":4252,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:48:10.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-71219c76-7f04-40f1-a735-2b45e837b9fc
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:48:11.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7174" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":257,"skipped":4273,"failed":0}
SSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:48:11.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:48:11.154: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-891e9a4a-f315-44c0-8b4e-44e10822bf8d" in namespace "security-context-test-544" to be "success or failure"
Jan 31 22:48:11.192: INFO: Pod "busybox-readonly-false-891e9a4a-f315-44c0-8b4e-44e10822bf8d": Phase="Pending", Reason="", readiness=false. Elapsed: 37.715958ms
Jan 31 22:48:13.198: INFO: Pod "busybox-readonly-false-891e9a4a-f315-44c0-8b4e-44e10822bf8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043690144s
Jan 31 22:48:15.256: INFO: Pod "busybox-readonly-false-891e9a4a-f315-44c0-8b4e-44e10822bf8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102439315s
Jan 31 22:48:17.273: INFO: Pod "busybox-readonly-false-891e9a4a-f315-44c0-8b4e-44e10822bf8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118982567s
Jan 31 22:48:19.279: INFO: Pod "busybox-readonly-false-891e9a4a-f315-44c0-8b4e-44e10822bf8d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124772562s
Jan 31 22:48:21.287: INFO: Pod "busybox-readonly-false-891e9a4a-f315-44c0-8b4e-44e10822bf8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.132639917s
Jan 31 22:48:21.287: INFO: Pod "busybox-readonly-false-891e9a4a-f315-44c0-8b4e-44e10822bf8d" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:48:21.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-544" for this suite.

• [SLOW TEST:10.282 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4277,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:48:21.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-4a2b712c-754b-4190-86f3-1aa11f5a3b7b
STEP: Creating a pod to test consume secrets
Jan 31 22:48:21.406: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-53045f61-b572-4197-a6fa-be08a0ea349a" in namespace "projected-1128" to be "success or failure"
Jan 31 22:48:21.419: INFO: Pod "pod-projected-secrets-53045f61-b572-4197-a6fa-be08a0ea349a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.325035ms
Jan 31 22:48:23.431: INFO: Pod "pod-projected-secrets-53045f61-b572-4197-a6fa-be08a0ea349a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025256978s
Jan 31 22:48:25.468: INFO: Pod "pod-projected-secrets-53045f61-b572-4197-a6fa-be08a0ea349a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06242473s
Jan 31 22:48:27.475: INFO: Pod "pod-projected-secrets-53045f61-b572-4197-a6fa-be08a0ea349a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069198108s
Jan 31 22:48:29.482: INFO: Pod "pod-projected-secrets-53045f61-b572-4197-a6fa-be08a0ea349a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076224985s
STEP: Saw pod success
Jan 31 22:48:29.482: INFO: Pod "pod-projected-secrets-53045f61-b572-4197-a6fa-be08a0ea349a" satisfied condition "success or failure"
Jan 31 22:48:29.487: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-53045f61-b572-4197-a6fa-be08a0ea349a container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 22:48:29.582: INFO: Waiting for pod pod-projected-secrets-53045f61-b572-4197-a6fa-be08a0ea349a to disappear
Jan 31 22:48:29.602: INFO: Pod pod-projected-secrets-53045f61-b572-4197-a6fa-be08a0ea349a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:48:29.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1128" for this suite.

• [SLOW TEST:8.320 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4278,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:48:29.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-6623
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-6623
STEP: creating replication controller externalsvc in namespace services-6623
I0131 22:48:30.207701       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6623, replica count: 2
I0131 22:48:33.258657       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 22:48:36.259452       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 22:48:39.260095       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 22:48:42.260523       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jan 31 22:48:42.288: INFO: Creating new exec pod
Jan 31 22:48:50.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6623 execpod64mcr -- /bin/sh -x -c nslookup nodeport-service'
Jan 31 22:48:50.861: INFO: stderr: "I0131 22:48:50.605869    3996 log.go:172] (0xc000bc4bb0) (0xc000bba280) Create stream\nI0131 22:48:50.606284    3996 log.go:172] (0xc000bc4bb0) (0xc000bba280) Stream added, broadcasting: 1\nI0131 22:48:50.615934    3996 log.go:172] (0xc000bc4bb0) Reply frame received for 1\nI0131 22:48:50.616296    3996 log.go:172] (0xc000bc4bb0) (0xc000653e00) Create stream\nI0131 22:48:50.616354    3996 log.go:172] (0xc000bc4bb0) (0xc000653e00) Stream added, broadcasting: 3\nI0131 22:48:50.619783    3996 log.go:172] (0xc000bc4bb0) Reply frame received for 3\nI0131 22:48:50.619903    3996 log.go:172] (0xc000bc4bb0) (0xc000bba320) Create stream\nI0131 22:48:50.619929    3996 log.go:172] (0xc000bc4bb0) (0xc000bba320) Stream added, broadcasting: 5\nI0131 22:48:50.623215    3996 log.go:172] (0xc000bc4bb0) Reply frame received for 5\nI0131 22:48:50.749061    3996 log.go:172] (0xc000bc4bb0) Data frame received for 5\nI0131 22:48:50.749826    3996 log.go:172] (0xc000bba320) (5) Data frame handling\nI0131 22:48:50.749867    3996 log.go:172] (0xc000bba320) (5) Data frame sent\n+ nslookup nodeport-service\nI0131 22:48:50.778273    3996 log.go:172] (0xc000bc4bb0) Data frame received for 3\nI0131 22:48:50.778927    3996 log.go:172] (0xc000653e00) (3) Data frame handling\nI0131 22:48:50.779140    3996 log.go:172] (0xc000653e00) (3) Data frame sent\nI0131 22:48:50.782441    3996 log.go:172] (0xc000bc4bb0) Data frame received for 3\nI0131 22:48:50.782454    3996 log.go:172] (0xc000653e00) (3) Data frame handling\nI0131 22:48:50.782462    3996 log.go:172] (0xc000653e00) (3) Data frame sent\nI0131 22:48:50.846448    3996 log.go:172] (0xc000bc4bb0) Data frame received for 1\nI0131 22:48:50.846711    3996 log.go:172] (0xc000bba280) (1) Data frame handling\nI0131 22:48:50.846741    3996 log.go:172] (0xc000bba280) (1) Data frame sent\nI0131 22:48:50.846782    3996 log.go:172] (0xc000bc4bb0) (0xc000bba280) Stream removed, broadcasting: 1\nI0131 22:48:50.849187    3996 log.go:172] (0xc000bc4bb0) (0xc000653e00) Stream removed, broadcasting: 3\nI0131 22:48:50.849275    3996 log.go:172] (0xc000bc4bb0) (0xc000bba320) Stream removed, broadcasting: 5\nI0131 22:48:50.849320    3996 log.go:172] (0xc000bc4bb0) Go away received\nI0131 22:48:50.850049    3996 log.go:172] (0xc000bc4bb0) (0xc000bba280) Stream removed, broadcasting: 1\nI0131 22:48:50.850090    3996 log.go:172] (0xc000bc4bb0) (0xc000653e00) Stream removed, broadcasting: 3\nI0131 22:48:50.850124    3996 log.go:172] (0xc000bc4bb0) (0xc000bba320) Stream removed, broadcasting: 5\n"
Jan 31 22:48:50.861: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6623.svc.cluster.local\tcanonical name = externalsvc.services-6623.svc.cluster.local.\nName:\texternalsvc.services-6623.svc.cluster.local\nAddress: 10.96.36.245\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-6623, will wait for the garbage collector to delete the pods
Jan 31 22:48:50.936: INFO: Deleting ReplicationController externalsvc took: 14.763095ms
Jan 31 22:48:51.236: INFO: Terminating ReplicationController externalsvc pods took: 300.364022ms
Jan 31 22:49:03.229: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:49:03.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6623" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:33.650 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":260,"skipped":4281,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:49:03.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-3722
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3722 to expose endpoints map[]
Jan 31 22:49:03.385: INFO: Get endpoints failed (10.084076ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 31 22:49:04.391: INFO: successfully validated that service multi-endpoint-test in namespace services-3722 exposes endpoints map[] (1.016067296s elapsed)
STEP: Creating pod pod1 in namespace services-3722
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3722 to expose endpoints map[pod1:[100]]
Jan 31 22:49:08.486: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.072976273s elapsed, will retry)
Jan 31 22:49:13.568: INFO: successfully validated that service multi-endpoint-test in namespace services-3722 exposes endpoints map[pod1:[100]] (9.154420662s elapsed)
STEP: Creating pod pod2 in namespace services-3722
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3722 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 31 22:49:19.578: INFO: Unexpected endpoints: found map[864bf68b-9588-4f96-b4bd-dee77377c523:[100]], expected map[pod1:[100] pod2:[101]] (6.001543471s elapsed, will retry)
Jan 31 22:49:22.636: INFO: successfully validated that service multi-endpoint-test in namespace services-3722 exposes endpoints map[pod1:[100] pod2:[101]] (9.059642327s elapsed)
STEP: Deleting pod pod1 in namespace services-3722
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3722 to expose endpoints map[pod2:[101]]
Jan 31 22:49:22.683: INFO: successfully validated that service multi-endpoint-test in namespace services-3722 exposes endpoints map[pod2:[101]] (38.93223ms elapsed)
STEP: Deleting pod pod2 in namespace services-3722
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3722 to expose endpoints map[]
Jan 31 22:49:23.799: INFO: successfully validated that service multi-endpoint-test in namespace services-3722 exposes endpoints map[] (1.052521878s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:49:23.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3722" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:20.827 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":261,"skipped":4301,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:49:24.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 31 22:49:32.803: INFO: Successfully updated pod "labelsupdateb8285213-5443-46c4-9692-3f8f198508d4"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:49:36.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8453" for this suite.

• [SLOW TEST:12.857 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4327,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:49:36.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7478
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jan 31 22:49:37.142: INFO: Found 0 stateful pods, waiting for 3
Jan 31 22:49:47.448: INFO: Found 2 stateful pods, waiting for 3
Jan 31 22:49:57.149: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 22:49:57.149: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 22:49:57.149: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 22:50:07.152: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 22:50:07.152: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 22:50:07.152: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 22:50:07.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7478 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 22:50:07.733: INFO: stderr: "I0131 22:50:07.413059    4017 log.go:172] (0xc000b446e0) (0xc0004dc000) Create stream\nI0131 22:50:07.413276    4017 log.go:172] (0xc000b446e0) (0xc0004dc000) Stream added, broadcasting: 1\nI0131 22:50:07.417707    4017 log.go:172] (0xc000b446e0) Reply frame received for 1\nI0131 22:50:07.417815    4017 log.go:172] (0xc000b446e0) (0xc0006abae0) Create stream\nI0131 22:50:07.417847    4017 log.go:172] (0xc000b446e0) (0xc0006abae0) Stream added, broadcasting: 3\nI0131 22:50:07.419662    4017 log.go:172] (0xc000b446e0) Reply frame received for 3\nI0131 22:50:07.419699    4017 log.go:172] (0xc000b446e0) (0xc00029a000) Create stream\nI0131 22:50:07.419711    4017 log.go:172] (0xc000b446e0) (0xc00029a000) Stream added, broadcasting: 5\nI0131 22:50:07.421243    4017 log.go:172] (0xc000b446e0) Reply frame received for 5\nI0131 22:50:07.522159    4017 log.go:172] (0xc000b446e0) Data frame received for 5\nI0131 22:50:07.522257    4017 log.go:172] (0xc00029a000) (5) Data frame handling\nI0131 22:50:07.522291    4017 log.go:172] (0xc00029a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 22:50:07.598402    4017 log.go:172] (0xc000b446e0) Data frame received for 3\nI0131 22:50:07.598501    4017 log.go:172] (0xc0006abae0) (3) Data frame handling\nI0131 22:50:07.598605    4017 log.go:172] (0xc0006abae0) (3) Data frame sent\nI0131 22:50:07.720859    4017 log.go:172] (0xc000b446e0) Data frame received for 1\nI0131 22:50:07.720930    4017 log.go:172] (0xc000b446e0) (0xc00029a000) Stream removed, broadcasting: 5\nI0131 22:50:07.721031    4017 log.go:172] (0xc0004dc000) (1) Data frame handling\nI0131 22:50:07.721077    4017 log.go:172] (0xc000b446e0) (0xc0006abae0) Stream removed, broadcasting: 3\nI0131 22:50:07.721127    4017 log.go:172] (0xc0004dc000) (1) Data frame sent\nI0131 22:50:07.721145    4017 log.go:172] (0xc000b446e0) (0xc0004dc000) Stream removed, broadcasting: 1\nI0131 22:50:07.721162    4017 log.go:172] (0xc000b446e0) Go away received\nI0131 22:50:07.722228    4017 log.go:172] (0xc000b446e0) (0xc0004dc000) Stream removed, broadcasting: 1\nI0131 22:50:07.722243    4017 log.go:172] (0xc000b446e0) (0xc0006abae0) Stream removed, broadcasting: 3\nI0131 22:50:07.722250    4017 log.go:172] (0xc000b446e0) (0xc00029a000) Stream removed, broadcasting: 5\n"
Jan 31 22:50:07.733: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 22:50:07.734: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 31 22:50:17.788: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 31 22:50:27.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7478 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 22:50:28.242: INFO: stderr: "I0131 22:50:28.071231    4039 log.go:172] (0xc000a40dc0) (0xc000a1e280) Create stream\nI0131 22:50:28.071501    4039 log.go:172] (0xc000a40dc0) (0xc000a1e280) Stream added, broadcasting: 1\nI0131 22:50:28.073895    4039 log.go:172] (0xc000a40dc0) Reply frame received for 1\nI0131 22:50:28.073956    4039 log.go:172] (0xc000a40dc0) (0xc0008d8000) Create stream\nI0131 22:50:28.073969    4039 log.go:172] (0xc000a40dc0) (0xc0008d8000) Stream added, broadcasting: 3\nI0131 22:50:28.075087    4039 log.go:172] (0xc000a40dc0) Reply frame received for 3\nI0131 22:50:28.075118    4039 log.go:172] (0xc000a40dc0) (0xc000697c20) Create stream\nI0131 22:50:28.075130    4039 log.go:172] (0xc000a40dc0) (0xc000697c20) Stream added, broadcasting: 5\nI0131 22:50:28.076884    4039 log.go:172] (0xc000a40dc0) Reply frame received for 5\nI0131 22:50:28.135963    4039 log.go:172] (0xc000a40dc0) Data frame received for 3\nI0131 22:50:28.136024    4039 log.go:172] (0xc0008d8000) (3) Data frame handling\nI0131 22:50:28.136055    4039 log.go:172] (0xc0008d8000) (3) Data frame sent\nI0131 22:50:28.136330    4039 log.go:172] (0xc000a40dc0) Data frame received for 5\nI0131 22:50:28.136341    4039 log.go:172] (0xc000697c20) (5) Data frame handling\nI0131 22:50:28.136354    4039 log.go:172] (0xc000697c20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 22:50:28.232114    4039 log.go:172] (0xc000a40dc0) (0xc0008d8000) Stream removed, broadcasting: 3\nI0131 22:50:28.232193    4039 log.go:172] (0xc000a40dc0) Data frame received for 1\nI0131 22:50:28.232207    4039 log.go:172] (0xc000a1e280) (1) Data frame handling\nI0131 22:50:28.232223    4039 log.go:172] (0xc000a1e280) (1) Data frame sent\nI0131 22:50:28.232301    4039 log.go:172] (0xc000a40dc0) (0xc000697c20) Stream removed, broadcasting: 5\nI0131 22:50:28.232334    4039 log.go:172] (0xc000a40dc0) (0xc000a1e280) Stream removed, broadcasting: 1\nI0131 22:50:28.232342    4039 log.go:172] (0xc000a40dc0) Go away received\nI0131 22:50:28.232865    4039 log.go:172] (0xc000a40dc0) (0xc000a1e280) Stream removed, broadcasting: 1\nI0131 22:50:28.232882    4039 log.go:172] (0xc000a40dc0) (0xc0008d8000) Stream removed, broadcasting: 3\nI0131 22:50:28.232891    4039 log.go:172] (0xc000a40dc0) (0xc000697c20) Stream removed, broadcasting: 5\n"
Jan 31 22:50:28.242: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 22:50:28.242: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 22:50:28.271: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update
Jan 31 22:50:28.272: INFO: Waiting for Pod statefulset-7478/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 22:50:28.272: INFO: Waiting for Pod statefulset-7478/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 22:50:28.272: INFO: Waiting for Pod statefulset-7478/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 22:50:38.285: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update
Jan 31 22:50:38.285: INFO: Waiting for Pod statefulset-7478/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 22:50:38.285: INFO: Waiting for Pod statefulset-7478/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 22:50:48.283: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update
Jan 31 22:50:48.283: INFO: Waiting for Pod statefulset-7478/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 22:50:48.284: INFO: Waiting for Pod statefulset-7478/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 22:50:58.365: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update
Jan 31 22:50:58.365: INFO: Waiting for Pod statefulset-7478/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 22:51:08.299: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 31 22:51:18.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7478 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 22:51:18.762: INFO: stderr: "I0131 22:51:18.466361    4059 log.go:172] (0xc0009b6000) (0xc0005c4780) Create stream\nI0131 22:51:18.467135    4059 log.go:172] (0xc0009b6000) (0xc0005c4780) Stream added, broadcasting: 1\nI0131 22:51:18.472749    4059 log.go:172] (0xc0009b6000) Reply frame received for 1\nI0131 22:51:18.472857    4059 log.go:172] (0xc0009b6000) (0xc000405540) Create stream\nI0131 22:51:18.472882    4059 log.go:172] (0xc0009b6000) (0xc000405540) Stream added, broadcasting: 3\nI0131 22:51:18.474119    4059 log.go:172] (0xc0009b6000) Reply frame received for 3\nI0131 22:51:18.474170    4059 log.go:172] (0xc0009b6000) (0xc000a2c000) Create stream\nI0131 22:51:18.474178    4059 log.go:172] (0xc0009b6000) (0xc000a2c000) Stream added, broadcasting: 5\nI0131 22:51:18.475951    4059 log.go:172] (0xc0009b6000) Reply frame received for 5\nI0131 22:51:18.570920    4059 log.go:172] (0xc0009b6000) Data frame received for 5\nI0131 22:51:18.571144    4059 log.go:172] (0xc000a2c000) (5) Data frame handling\nI0131 22:51:18.571176    4059 log.go:172] (0xc000a2c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 22:51:18.613475    4059 log.go:172] (0xc0009b6000) Data frame received for 3\nI0131 22:51:18.613583    4059 log.go:172] (0xc000405540) (3) Data frame handling\nI0131 22:51:18.613620    4059 log.go:172] (0xc000405540) (3) Data frame sent\nI0131 22:51:18.753897    4059 log.go:172] (0xc0009b6000) (0xc000405540) Stream removed, broadcasting: 3\nI0131 22:51:18.754330    4059 log.go:172] (0xc0009b6000) Data frame received for 1\nI0131 22:51:18.754440    4059 log.go:172] (0xc0009b6000) (0xc000a2c000) Stream removed, broadcasting: 5\nI0131 22:51:18.754515    4059 log.go:172] (0xc0005c4780) (1) Data frame handling\nI0131 22:51:18.754582    4059 log.go:172] (0xc0005c4780) (1) Data frame sent\nI0131 22:51:18.754603    4059 log.go:172] (0xc0009b6000) (0xc0005c4780) Stream removed, broadcasting: 1\nI0131 22:51:18.754639    4059 log.go:172] (0xc0009b6000) Go away received\nI0131 22:51:18.755731    4059 log.go:172] (0xc0009b6000) (0xc0005c4780) Stream removed, broadcasting: 1\nI0131 22:51:18.755772    4059 log.go:172] (0xc0009b6000) (0xc000405540) Stream removed, broadcasting: 3\nI0131 22:51:18.755787    4059 log.go:172] (0xc0009b6000) (0xc000a2c000) Stream removed, broadcasting: 5\n"
Jan 31 22:51:18.762: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 22:51:18.762: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 22:51:28.835: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 31 22:51:38.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7478 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 22:51:39.323: INFO: stderr: "I0131 22:51:39.123966    4079 log.go:172] (0xc000a980b0) (0xc0009bc140) Create stream\nI0131 22:51:39.124292    4079 log.go:172] (0xc000a980b0) (0xc0009bc140) Stream added, broadcasting: 1\nI0131 22:51:39.127674    4079 log.go:172] (0xc000a980b0) Reply frame received for 1\nI0131 22:51:39.127738    4079 log.go:172] (0xc000a980b0) (0xc0009bc280) Create stream\nI0131 22:51:39.127752    4079 log.go:172] (0xc000a980b0) (0xc0009bc280) Stream added, broadcasting: 3\nI0131 22:51:39.129460    4079 log.go:172] (0xc000a980b0) Reply frame received for 3\nI0131 22:51:39.129482    4079 log.go:172] (0xc000a980b0) (0xc0009bc320) Create stream\nI0131 22:51:39.129490    4079 log.go:172] (0xc000a980b0) (0xc0009bc320) Stream added, broadcasting: 5\nI0131 22:51:39.130786    4079 log.go:172] (0xc000a980b0) Reply frame received for 5\nI0131 22:51:39.198909    4079 log.go:172] (0xc000a980b0) Data frame received for 3\nI0131 22:51:39.199050    4079 log.go:172] (0xc0009bc280) (3) Data frame handling\nI0131 22:51:39.199084    4079 log.go:172] (0xc0009bc280) (3) Data frame sent\nI0131 22:51:39.199141    4079 log.go:172] (0xc000a980b0) Data frame received for 5\nI0131 22:51:39.199152    4079 log.go:172] (0xc0009bc320) (5) Data frame handling\nI0131 22:51:39.199170    4079 log.go:172] (0xc0009bc320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 22:51:39.309479    4079 log.go:172] (0xc000a980b0) Data frame received for 1\nI0131 22:51:39.309593    4079 log.go:172] (0xc0009bc140) (1) Data frame handling\nI0131 22:51:39.309635    4079 log.go:172] (0xc0009bc140) (1) Data frame sent\nI0131 22:51:39.309693    4079 log.go:172] (0xc000a980b0) (0xc0009bc140) Stream removed, broadcasting: 1\nI0131 22:51:39.312553    4079 log.go:172] (0xc000a980b0) (0xc0009bc280) Stream removed, broadcasting: 3\nI0131 22:51:39.312689    4079 log.go:172] (0xc000a980b0) (0xc0009bc320) Stream removed, broadcasting: 5\nI0131 22:51:39.312750    4079 log.go:172] (0xc000a980b0) (0xc0009bc140) Stream removed, broadcasting: 1\nI0131 22:51:39.312771    4079 log.go:172] (0xc000a980b0) (0xc0009bc280) Stream removed, broadcasting: 3\nI0131 22:51:39.312785    4079 log.go:172] (0xc000a980b0) (0xc0009bc320) Stream removed, broadcasting: 5\n"
Jan 31 22:51:39.323: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 22:51:39.323: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 22:51:49.361: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update
Jan 31 22:51:49.361: INFO: Waiting for Pod statefulset-7478/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 22:51:49.361: INFO: Waiting for Pod statefulset-7478/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 22:51:59.416: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update
Jan 31 22:51:59.416: INFO: Waiting for Pod statefulset-7478/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 22:52:09.373: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update
Jan 31 22:52:09.373: INFO: Waiting for Pod statefulset-7478/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 22:52:19.371: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update
Jan 31 22:52:19.371: INFO: Waiting for Pod statefulset-7478/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 22:52:29.376: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 31 22:52:39.374: INFO: Deleting all statefulset in ns statefulset-7478
Jan 31 22:52:39.378: INFO: Scaling statefulset ss2 to 0
Jan 31 22:53:09.416: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 22:53:09.420: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:53:09.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7478" for this suite.

• [SLOW TEST:212.513 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":263,"skipped":4342,"failed":0}
SSSSSSS
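
For reference, the rolling update and rollback exercised above map onto ordinary kubectl operations; a minimal sketch, assuming the container in the ss2 pod template is named webserver (the container name is not shown in this log):

  # Trigger a rolling update by changing the pod template image; with the
  # default RollingUpdate strategy, pods are replaced in reverse ordinal
  # order (ss2-2, ss2-1, ss2-0), as the log above shows.
  kubectl --namespace=statefulset-7478 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
  kubectl --namespace=statefulset-7478 rollout status statefulset/ss2
  # Roll back to the previous controller revision, as the test does next.
  kubectl --namespace=statefulset-7478 rollout undo statefulset/ss2
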
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:53:09.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-6d810783-53db-45e6-aae9-503b8a4552d6
STEP: Creating a pod to test consume secrets
Jan 31 22:53:09.590: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d1eeb7b6-8274-4b7d-8118-4819cdb2d9bb" in namespace "projected-8604" to be "success or failure"
Jan 31 22:53:09.608: INFO: Pod "pod-projected-secrets-d1eeb7b6-8274-4b7d-8118-4819cdb2d9bb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.640164ms
Jan 31 22:53:11.616: INFO: Pod "pod-projected-secrets-d1eeb7b6-8274-4b7d-8118-4819cdb2d9bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025018931s
Jan 31 22:53:13.645: INFO: Pod "pod-projected-secrets-d1eeb7b6-8274-4b7d-8118-4819cdb2d9bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054874415s
Jan 31 22:53:15.772: INFO: Pod "pod-projected-secrets-d1eeb7b6-8274-4b7d-8118-4819cdb2d9bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181363503s
Jan 31 22:53:17.791: INFO: Pod "pod-projected-secrets-d1eeb7b6-8274-4b7d-8118-4819cdb2d9bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.200715323s
Jan 31 22:53:19.804: INFO: Pod "pod-projected-secrets-d1eeb7b6-8274-4b7d-8118-4819cdb2d9bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.213469933s
STEP: Saw pod success
Jan 31 22:53:19.804: INFO: Pod "pod-projected-secrets-d1eeb7b6-8274-4b7d-8118-4819cdb2d9bb" satisfied condition "success or failure"
Jan 31 22:53:19.815: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-d1eeb7b6-8274-4b7d-8118-4819cdb2d9bb container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 22:53:20.019: INFO: Waiting for pod pod-projected-secrets-d1eeb7b6-8274-4b7d-8118-4819cdb2d9bb to disappear
Jan 31 22:53:20.027: INFO: Pod pod-projected-secrets-d1eeb7b6-8274-4b7d-8118-4819cdb2d9bb no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:53:20.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8604" for this suite.

• [SLOW TEST:10.565 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4349,"failed":0}
SSSSSSSSS
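
The projection verified above maps a secret key onto a custom path with an explicit per-item file mode; a minimal pod sketch, with the secret name, key, and path chosen for illustration (the test's generated names differ):

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"]
      volumeMounts:
      - name: projected-secret-volume
        mountPath: /etc/projected-secret-volume
        readOnly: true
    volumes:
    - name: projected-secret-volume
      projected:
        sources:
        - secret:
            name: my-secret              # assumed to exist with a key "data-1"
            items:
            - key: data-1
              path: new-path-data-1
              mode: 0400                 # the per-item "Item Mode" the test asserts on
  EOF
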
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:53:20.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 31 22:53:34.267: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 22:53:34.271: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 22:53:36.272: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 22:53:36.278: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 22:53:38.271: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 22:53:38.280: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 22:53:40.271: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 22:53:40.276: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 22:53:42.272: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 22:53:42.280: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 22:53:44.271: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 22:53:44.277: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:53:44.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2031" for this suite.

• [SLOW TEST:24.264 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4358,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
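
The preStop hook checked above runs after the delete request is accepted but before SIGTERM reaches the container; a minimal sketch with an assumed busybox image and a trivial hook command (the e2e test instead has the hook call back to the HTTPGet handler pod created in BeforeEach):

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-exec-hook
  spec:
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "sleep 600"]
      lifecycle:
        preStop:
          exec:
            # Executed inside the container on deletion, before SIGTERM.
            command: ["sh", "-c", "echo prestop-fired >> /tmp/hook.log; sleep 2"]
  EOF
  # Deleting the pod fires the hook, then normal termination proceeds.
  kubectl delete pod pod-with-prestop-exec-hook
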
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:53:44.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:53:44.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jan 31 22:53:47.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2414 create -f -'
Jan 31 22:53:50.284: INFO: stderr: ""
Jan 31 22:53:50.284: INFO: stdout: "e2e-test-crd-publish-openapi-2731-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 31 22:53:50.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2414 delete e2e-test-crd-publish-openapi-2731-crds test-foo'
Jan 31 22:53:50.428: INFO: stderr: ""
Jan 31 22:53:50.428: INFO: stdout: "e2e-test-crd-publish-openapi-2731-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jan 31 22:53:50.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2414 apply -f -'
Jan 31 22:53:50.783: INFO: stderr: ""
Jan 31 22:53:50.784: INFO: stdout: "e2e-test-crd-publish-openapi-2731-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 31 22:53:50.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2414 delete e2e-test-crd-publish-openapi-2731-crds test-foo'
Jan 31 22:53:51.032: INFO: stderr: ""
Jan 31 22:53:51.032: INFO: stdout: "e2e-test-crd-publish-openapi-2731-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jan 31 22:53:51.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2414 create -f -'
Jan 31 22:53:51.470: INFO: rc: 1
Jan 31 22:53:51.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2414 apply -f -'
Jan 31 22:53:51.767: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jan 31 22:53:51.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2414 create -f -'
Jan 31 22:53:52.109: INFO: rc: 1
Jan 31 22:53:52.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2414 apply -f -'
Jan 31 22:53:52.644: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jan 31 22:53:52.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2731-crds'
Jan 31 22:53:52.943: INFO: stderr: ""
Jan 31 22:53:52.943: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2731-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jan 31 22:53:52.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2731-crds.metadata'
Jan 31 22:53:53.441: INFO: stderr: ""
Jan 31 22:53:53.441: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2731-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jan 31 22:53:53.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2731-crds.spec'
Jan 31 22:53:53.775: INFO: stderr: ""
Jan 31 22:53:53.775: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2731-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jan 31 22:53:53.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2731-crds.spec.bars'
Jan 31 22:53:54.171: INFO: stderr: ""
Jan 31 22:53:54.172: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2731-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jan 31 22:53:54.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2731-crds.spec.bars2'
Jan 31 22:53:54.441: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:53:57.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2414" for this suite.

• [SLOW TEST:13.112 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":266,"skipped":4396,"failed":0}
SSSSSSSSSSSSSSSSSSS
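
The explain output above is served from the structural OpenAPI v3 schema that the CRD publishes; a compressed sketch with illustrative group and kind names (the test randomizes its own), reusing the field names and descriptions visible in the log:

  kubectl apply -f - <<EOF
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: foos.example.com
  spec:
    group: example.com
    scope: Namespaced
    names: {plural: foos, singular: foo, kind: Foo}
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          description: Foo CRD for Testing
          properties:
            spec:
              type: object
              description: Specification of Foo
              properties:
                bars:
                  type: array
                  description: List of Bars and their specs.
                  items:
                    type: object
                    required: [name]   # required + typed properties drive the client-side validation above
                    properties:
                      name: {type: string, description: "Name of Bar."}
                      age: {type: string, description: "Age of Bar."}
                      bazs: {type: array, description: "List of Bazs.", items: {type: string}}
            status:
              type: object
              description: Status of Foo
  EOF
  kubectl explain foos.spec.bars   # answered from the published schema
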
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:53:57.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 22:53:57.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-6654'
Jan 31 22:53:57.642: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 22:53:57.643: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718
Jan 31 22:53:59.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6654'
Jan 31 22:53:59.999: INFO: stderr: ""
Jan 31 22:53:59.999: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:54:00.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6654" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":267,"skipped":4415,"failed":0}
SSSS
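
As the deprecation warning above notes, the --generator=deployment/apps.v1 form is on its way out; the equivalent modern invocation (a sketch reusing the image and namespace from this test) is:

  kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6654
  kubectl delete deployment e2e-test-httpd-deployment --namespace=kubectl-6654
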
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:54:00.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7801
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-7801
I0131 22:54:00.166880       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7801, replica count: 2
I0131 22:54:03.218318       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 22:54:06.218677       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 22:54:09.219092       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 22:54:12.219741       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 31 22:54:12.220: INFO: Creating new exec pod
Jan 31 22:54:21.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7801 execpod4ffkh -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 31 22:54:21.636: INFO: stderr: "I0131 22:54:21.422692    4415 log.go:172] (0xc0000f5970) (0xc000665cc0) Create stream\nI0131 22:54:21.422854    4415 log.go:172] (0xc0000f5970) (0xc000665cc0) Stream added, broadcasting: 1\nI0131 22:54:21.426691    4415 log.go:172] (0xc0000f5970) Reply frame received for 1\nI0131 22:54:21.426729    4415 log.go:172] (0xc0000f5970) (0xc000665d60) Create stream\nI0131 22:54:21.426740    4415 log.go:172] (0xc0000f5970) (0xc000665d60) Stream added, broadcasting: 3\nI0131 22:54:21.428714    4415 log.go:172] (0xc0000f5970) Reply frame received for 3\nI0131 22:54:21.428743    4415 log.go:172] (0xc0000f5970) (0xc00090e000) Create stream\nI0131 22:54:21.429416    4415 log.go:172] (0xc0000f5970) (0xc00090e000) Stream added, broadcasting: 5\nI0131 22:54:21.434741    4415 log.go:172] (0xc0000f5970) Reply frame received for 5\nI0131 22:54:21.531230    4415 log.go:172] (0xc0000f5970) Data frame received for 5\nI0131 22:54:21.531604    4415 log.go:172] (0xc00090e000) (5) Data frame handling\nI0131 22:54:21.531675    4415 log.go:172] (0xc00090e000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0131 22:54:21.536374    4415 log.go:172] (0xc0000f5970) Data frame received for 5\nI0131 22:54:21.536412    4415 log.go:172] (0xc00090e000) (5) Data frame handling\nI0131 22:54:21.536429    4415 log.go:172] (0xc00090e000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0131 22:54:21.625220    4415 log.go:172] (0xc0000f5970) (0xc000665d60) Stream removed, broadcasting: 3\nI0131 22:54:21.625362    4415 log.go:172] (0xc0000f5970) Data frame received for 1\nI0131 22:54:21.625377    4415 log.go:172] (0xc000665cc0) (1) Data frame handling\nI0131 22:54:21.625394    4415 log.go:172] (0xc000665cc0) (1) Data frame sent\nI0131 22:54:21.625403    4415 log.go:172] (0xc0000f5970) (0xc000665cc0) Stream removed, broadcasting: 1\nI0131 22:54:21.625459    4415 log.go:172] (0xc0000f5970) (0xc00090e000) Stream removed, broadcasting: 5\nI0131 22:54:21.625512    4415 log.go:172] (0xc0000f5970) Go away received\nI0131 22:54:21.625921    4415 log.go:172] (0xc0000f5970) (0xc000665cc0) Stream removed, broadcasting: 1\nI0131 22:54:21.625942    4415 log.go:172] (0xc0000f5970) (0xc000665d60) Stream removed, broadcasting: 3\nI0131 22:54:21.625950    4415 log.go:172] (0xc0000f5970) (0xc00090e000) Stream removed, broadcasting: 5\n"
Jan 31 22:54:21.636: INFO: stdout: ""
Jan 31 22:54:21.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7801 execpod4ffkh -- /bin/sh -x -c nc -zv -t -w 2 10.96.100.64 80'
Jan 31 22:54:22.156: INFO: stderr: "I0131 22:54:21.864675    4434 log.go:172] (0xc0009c04d0) (0xc0005534a0) Create stream\nI0131 22:54:21.864935    4434 log.go:172] (0xc0009c04d0) (0xc0005534a0) Stream added, broadcasting: 1\nI0131 22:54:21.874112    4434 log.go:172] (0xc0009c04d0) Reply frame received for 1\nI0131 22:54:21.874160    4434 log.go:172] (0xc0009c04d0) (0xc00091a0a0) Create stream\nI0131 22:54:21.874169    4434 log.go:172] (0xc0009c04d0) (0xc00091a0a0) Stream added, broadcasting: 3\nI0131 22:54:21.875572    4434 log.go:172] (0xc0009c04d0) Reply frame received for 3\nI0131 22:54:21.875621    4434 log.go:172] (0xc0009c04d0) (0xc00067fae0) Create stream\nI0131 22:54:21.875638    4434 log.go:172] (0xc0009c04d0) (0xc00067fae0) Stream added, broadcasting: 5\nI0131 22:54:21.877104    4434 log.go:172] (0xc0009c04d0) Reply frame received for 5\nI0131 22:54:21.978391    4434 log.go:172] (0xc0009c04d0) Data frame received for 5\nI0131 22:54:21.978578    4434 log.go:172] (0xc00067fae0) (5) Data frame handling\nI0131 22:54:21.978629    4434 log.go:172] (0xc00067fae0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.100.64 80\nI0131 22:54:21.987309    4434 log.go:172] (0xc0009c04d0) Data frame received for 5\nI0131 22:54:21.987454    4434 log.go:172] (0xc00067fae0) (5) Data frame handling\nI0131 22:54:21.987476    4434 log.go:172] (0xc00067fae0) (5) Data frame sent\nConnection to 10.96.100.64 80 port [tcp/http] succeeded!\nI0131 22:54:22.127337    4434 log.go:172] (0xc0009c04d0) (0xc00091a0a0) Stream removed, broadcasting: 3\nI0131 22:54:22.127886    4434 log.go:172] (0xc0009c04d0) Data frame received for 1\nI0131 22:54:22.127914    4434 log.go:172] (0xc0005534a0) (1) Data frame handling\nI0131 22:54:22.127944    4434 log.go:172] (0xc0005534a0) (1) Data frame sent\nI0131 22:54:22.127963    4434 log.go:172] (0xc0009c04d0) (0xc0005534a0) Stream removed, broadcasting: 1\nI0131 22:54:22.129623    4434 log.go:172] (0xc0009c04d0) (0xc00067fae0) Stream removed, broadcasting: 5\nI0131 22:54:22.129706    4434 log.go:172] (0xc0009c04d0) (0xc0005534a0) Stream removed, broadcasting: 1\nI0131 22:54:22.129730    4434 log.go:172] (0xc0009c04d0) (0xc00091a0a0) Stream removed, broadcasting: 3\nI0131 22:54:22.129746    4434 log.go:172] (0xc0009c04d0) (0xc00067fae0) Stream removed, broadcasting: 5\nI0131 22:54:22.130189    4434 log.go:172] (0xc0009c04d0) Go away received\n"
Jan 31 22:54:22.157: INFO: stdout: ""
Jan 31 22:54:22.157: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:54:22.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7801" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:22.237 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":268,"skipped":4419,"failed":0}
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:54:22.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 31 22:54:22.366: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:54:40.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8096" for this suite.

• [SLOW TEST:18.004 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":269,"skipped":4419,"failed":0}
SSSSSSSSSSSSSSS
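
The invariant checked above is that init containers run serially to completion before the app container starts, and that with restartPolicy Always the pod then stays Running; a minimal sketch with assumed images:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo
  spec:
    restartPolicy: Always
    initContainers:
    - name: init1
      image: docker.io/library/busybox:1.29
      command: ["true"]            # must exit 0 before init2 starts
    - name: init2
      image: docker.io/library/busybox:1.29
      command: ["true"]            # must exit 0 before run1 starts
    containers:
    - name: run1
      image: k8s.gcr.io/pause:3.1
  EOF
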
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:54:40.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 31 22:54:40.408: INFO: Waiting up to 5m0s for pod "pod-5198e3bf-8f57-451d-9d29-dc15e8085e9f" in namespace "emptydir-6984" to be "success or failure"
Jan 31 22:54:40.425: INFO: Pod "pod-5198e3bf-8f57-451d-9d29-dc15e8085e9f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.533916ms
Jan 31 22:54:42.433: INFO: Pod "pod-5198e3bf-8f57-451d-9d29-dc15e8085e9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025698398s
Jan 31 22:54:44.441: INFO: Pod "pod-5198e3bf-8f57-451d-9d29-dc15e8085e9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032922544s
Jan 31 22:54:46.450: INFO: Pod "pod-5198e3bf-8f57-451d-9d29-dc15e8085e9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042078822s
Jan 31 22:54:48.457: INFO: Pod "pod-5198e3bf-8f57-451d-9d29-dc15e8085e9f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049020567s
Jan 31 22:54:50.465: INFO: Pod "pod-5198e3bf-8f57-451d-9d29-dc15e8085e9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057239582s
STEP: Saw pod success
Jan 31 22:54:50.465: INFO: Pod "pod-5198e3bf-8f57-451d-9d29-dc15e8085e9f" satisfied condition "success or failure"
Jan 31 22:54:50.470: INFO: Trying to get logs from node jerma-node pod pod-5198e3bf-8f57-451d-9d29-dc15e8085e9f container test-container: 
STEP: delete the pod
Jan 31 22:54:50.561: INFO: Waiting for pod pod-5198e3bf-8f57-451d-9d29-dc15e8085e9f to disappear
Jan 31 22:54:50.583: INFO: Pod pod-5198e3bf-8f57-451d-9d29-dc15e8085e9f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:54:50.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6984" for this suite.

• [SLOW TEST:10.408 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4434,"failed":0}
SSSSSSSSS
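
"(non-root,0644,tmpfs)" in the test name means: a non-root user writes a 0644-mode file into a memory-backed emptyDir and reads it back; a minimal sketch with an assumed uid and busybox image:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001              # non-root
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory             # tmpfs-backed
  EOF
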
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:54:50.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 31 22:54:58.966: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:54:58.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7810" for this suite.

• [SLOW TEST:8.339 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4443,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
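
With TerminationMessagePolicy FallbackToLogsOnError, a failed container that writes nothing to its terminationMessagePath gets the tail of its log as the message, which is why the test sees "DONE" above; a minimal sketch with an assumed busybox image:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-demo
  spec:
    restartPolicy: Never
    containers:
    - name: term
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "echo DONE; exit 1"]   # non-zero exit triggers the log fallback
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
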
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:54:59.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 31 22:54:59.121: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-943 /api/v1/namespaces/watch-943/configmaps/e2e-watch-test-watch-closed 62ecd917-1af4-4be5-8479-35143a24eaf7 5620085 0 2020-01-31 22:54:59 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 31 22:54:59.122: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-943 /api/v1/namespaces/watch-943/configmaps/e2e-watch-test-watch-closed 62ecd917-1af4-4be5-8479-35143a24eaf7 5620086 0 2020-01-31 22:54:59 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 31 22:54:59.197: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-943 /api/v1/namespaces/watch-943/configmaps/e2e-watch-test-watch-closed 62ecd917-1af4-4be5-8479-35143a24eaf7 5620087 0 2020-01-31 22:54:59 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 31 22:54:59.197: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-943 /api/v1/namespaces/watch-943/configmaps/e2e-watch-test-watch-closed 62ecd917-1af4-4be5-8479-35143a24eaf7 5620088 0 2020-01-31 22:54:59 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:54:59.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-943" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":272,"skipped":4471,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:54:59.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 31 22:54:59.293: INFO: >>> kubeConfig: /root/.kube/config
Jan 31 22:55:02.382: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:55:14.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3786" for this suite.

• [SLOW TEST:15.310 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":273,"skipped":4480,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:55:14.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0131 22:55:45.135491       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 22:55:45.135: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:55:45.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6093" for this suite.

• [SLOW TEST:30.627 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":274,"skipped":4485,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:55:45.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-204a66a8-232f-4676-a8e5-c9e9515a7743
STEP: Creating a pod to test consume secrets
Jan 31 22:55:45.286: INFO: Waiting up to 5m0s for pod "pod-secrets-0f245e1d-629c-4715-a830-4174c1fb435c" in namespace "secrets-1241" to be "success or failure"
Jan 31 22:55:45.320: INFO: Pod "pod-secrets-0f245e1d-629c-4715-a830-4174c1fb435c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.609567ms
Jan 31 22:55:47.332: INFO: Pod "pod-secrets-0f245e1d-629c-4715-a830-4174c1fb435c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045174206s
Jan 31 22:55:49.612: INFO: Pod "pod-secrets-0f245e1d-629c-4715-a830-4174c1fb435c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325533744s
Jan 31 22:55:51.963: INFO: Pod "pod-secrets-0f245e1d-629c-4715-a830-4174c1fb435c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.676147969s
Jan 31 22:55:53.981: INFO: Pod "pod-secrets-0f245e1d-629c-4715-a830-4174c1fb435c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.694162053s
Jan 31 22:55:56.005: INFO: Pod "pod-secrets-0f245e1d-629c-4715-a830-4174c1fb435c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.718600379s
Jan 31 22:55:58.010: INFO: Pod "pod-secrets-0f245e1d-629c-4715-a830-4174c1fb435c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.72392s
STEP: Saw pod success
Jan 31 22:55:58.011: INFO: Pod "pod-secrets-0f245e1d-629c-4715-a830-4174c1fb435c" satisfied condition "success or failure"
Jan 31 22:55:58.015: INFO: Trying to get logs from node jerma-node pod pod-secrets-0f245e1d-629c-4715-a830-4174c1fb435c container secret-volume-test: 
STEP: delete the pod
Jan 31 22:55:58.266: INFO: Waiting for pod pod-secrets-0f245e1d-629c-4715-a830-4174c1fb435c to disappear
Jan 31 22:55:58.277: INFO: Pod pod-secrets-0f245e1d-629c-4715-a830-4174c1fb435c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:55:58.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1241" for this suite.

• [SLOW TEST:13.147 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4496,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:55:58.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Jan 31 22:55:58.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 31 22:55:58.965: INFO: stderr: ""
Jan 31 22:55:58.965: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:55:58.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2218" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":276,"skipped":4500,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:55:58.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 22:55:59.567: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 22:56:01.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716108159, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716108159, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716108159, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716108159, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:56:03.592: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716108159, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716108159, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716108159, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716108159, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 22:56:05.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716108159, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716108159, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716108159, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716108159, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 22:56:08.632: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 31 22:56:08.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7823-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:56:09.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3520" for this suite.
STEP: Destroying namespace "webhook-3520-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.146 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":277,"skipped":4524,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 31 22:56:10.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 31 22:56:10.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8aa55ba-1962-4c5c-847a-c8fcd0f660db" in namespace "projected-4961" to be "success or failure"
Jan 31 22:56:10.245: INFO: Pod "downwardapi-volume-e8aa55ba-1962-4c5c-847a-c8fcd0f660db": Phase="Pending", Reason="", readiness=false. Elapsed: 22.337556ms
Jan 31 22:56:12.252: INFO: Pod "downwardapi-volume-e8aa55ba-1962-4c5c-847a-c8fcd0f660db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028667094s
Jan 31 22:56:14.257: INFO: Pod "downwardapi-volume-e8aa55ba-1962-4c5c-847a-c8fcd0f660db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033669646s
Jan 31 22:56:16.276: INFO: Pod "downwardapi-volume-e8aa55ba-1962-4c5c-847a-c8fcd0f660db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053404879s
Jan 31 22:56:18.283: INFO: Pod "downwardapi-volume-e8aa55ba-1962-4c5c-847a-c8fcd0f660db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060077076s
STEP: Saw pod success
Jan 31 22:56:18.283: INFO: Pod "downwardapi-volume-e8aa55ba-1962-4c5c-847a-c8fcd0f660db" satisfied condition "success or failure"
Jan 31 22:56:18.287: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e8aa55ba-1962-4c5c-847a-c8fcd0f660db container client-container: 
STEP: delete the pod
Jan 31 22:56:18.351: INFO: Waiting for pod downwardapi-volume-e8aa55ba-1962-4c5c-847a-c8fcd0f660db to disappear
Jan 31 22:56:18.411: INFO: Pod downwardapi-volume-e8aa55ba-1962-4c5c-847a-c8fcd0f660db no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 31 22:56:18.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4961" for this suite.

• [SLOW TEST:8.286 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4526,"failed":0}
SSSSSSSSSS
Jan 31 22:56:18.421: INFO: Running AfterSuite actions on all nodes
Jan 31 22:56:18.421: INFO: Running AfterSuite actions on node 1
Jan 31 22:56:18.421: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4536,"failed":0}

Ran 278 of 4814 Specs in 6413.520 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4536 Skipped
PASS