I0126 10:47:16.827541 8 e2e.go:224] Starting e2e run "378266df-4029-11ea-b664-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580035635 - Will randomize all specs
Will run 201 of 2164 specs

Jan 26 10:47:17.077: INFO: >>> kubeConfig: /root/.kube/config
Jan 26 10:47:17.080: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 26 10:47:17.101: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 26 10:47:17.141: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 26 10:47:17.141: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 26 10:47:17.141: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 26 10:47:17.152: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 26 10:47:17.152: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 26 10:47:17.152: INFO: e2e test version: v1.13.12
Jan 26 10:47:17.153: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:47:17.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jan 26 10:47:17.341: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 10:47:17.358: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38781622-4029-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-kw2tr" to be "success or failure"
Jan 26 10:47:17.484: INFO: Pod "downwardapi-volume-38781622-4029-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 126.103259ms
Jan 26 10:47:19.497: INFO: Pod "downwardapi-volume-38781622-4029-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139710554s
Jan 26 10:47:21.516: INFO: Pod "downwardapi-volume-38781622-4029-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158030776s
Jan 26 10:47:24.145: INFO: Pod "downwardapi-volume-38781622-4029-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.787511707s
Jan 26 10:47:26.216: INFO: Pod "downwardapi-volume-38781622-4029-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.858023997s
Jan 26 10:47:28.229: INFO: Pod "downwardapi-volume-38781622-4029-11ea-b664-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.871411571s
Jan 26 10:47:30.405: INFO: Pod "downwardapi-volume-38781622-4029-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.047324115s
STEP: Saw pod success
Jan 26 10:47:30.405: INFO: Pod "downwardapi-volume-38781622-4029-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 10:47:30.416: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-38781622-4029-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 10:47:30.641: INFO: Waiting for pod downwardapi-volume-38781622-4029-11ea-b664-0242ac110005 to disappear
Jan 26 10:47:30.695: INFO: Pod downwardapi-volume-38781622-4029-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:47:30.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kw2tr" for this suite.
Jan 26 10:47:36.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:47:36.928: INFO: namespace: e2e-tests-projected-kw2tr, resource: bindings, ignored listing per whitelist
Jan 26 10:47:37.063: INFO: namespace e2e-tests-projected-kw2tr deletion completed in 6.242487361s

• [SLOW TEST:19.910 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:47:37.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 10:47:38.240: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 33.943392ms)
Jan 26 10:47:38.264: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.450235ms)
Jan 26 10:47:38.270: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.546474ms)
Jan 26 10:47:38.276: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.094322ms)
Jan 26 10:47:38.281: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.245323ms)
Jan 26 10:47:38.286: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.546508ms)
Jan 26 10:47:38.293: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.303317ms)
Jan 26 10:47:38.405: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 111.664111ms)
Jan 26 10:47:38.424: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.670975ms)
Jan 26 10:47:38.443: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.573877ms)
Jan 26 10:47:38.458: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.908127ms)
Jan 26 10:47:38.469: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.035107ms)
Jan 26 10:47:38.478: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.95569ms)
Jan 26 10:47:38.491: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.917262ms)
Jan 26 10:47:38.565: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 73.200675ms)
Jan 26 10:47:38.623: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 57.80287ms)
Jan 26 10:47:38.678: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 54.668683ms)
Jan 26 10:47:38.734: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 56.133133ms)
Jan 26 10:47:38.784: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 49.656405ms)
Jan 26 10:47:38.798: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.382135ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:47:38.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-5ggq8" for this suite.
Jan 26 10:47:44.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:47:45.029: INFO: namespace: e2e-tests-proxy-5ggq8, resource: bindings, ignored listing per whitelist
Jan 26 10:47:45.130: INFO: namespace e2e-tests-proxy-5ggq8 deletion completed in 6.321829659s

• [SLOW TEST:8.067 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
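The proxy test above hits the node proxy "logs" subresource twenty times and records the HTTP status and latency of each request. As a rough illustration only (these commands are not part of the test output), the same endpoint can be queried by hand; the node name is taken from this run, and the 127.0.0.1:8001 listener assumes a default kubectl proxy:

  # Query the same subresource through a local API proxy
  # (kubectl proxy listens on 127.0.0.1:8001 by default):
  kubectl proxy &
  curl http://127.0.0.1:8001/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/

  # Or go through kubectl directly with the raw API path:
  kubectl get --raw /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/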
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:47:45.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-6w7cz
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 26 10:47:45.363: INFO: Found 0 stateful pods, waiting for 3
Jan 26 10:47:55.376: INFO: Found 1 stateful pods, waiting for 3
Jan 26 10:48:05.379: INFO: Found 2 stateful pods, waiting for 3
Jan 26 10:48:15.644: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 10:48:15.644: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 10:48:15.644: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 26 10:48:25.377: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 10:48:25.377: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 10:48:25.377: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 10:48:25.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6w7cz ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 10:48:26.125: INFO: stderr: "I0126 10:48:25.597920      40 log.go:172] (0xc00016c0b0) (0xc000257040) Create stream\nI0126 10:48:25.598058      40 log.go:172] (0xc00016c0b0) (0xc000257040) Stream added, broadcasting: 1\nI0126 10:48:25.602145      40 log.go:172] (0xc00016c0b0) Reply frame received for 1\nI0126 10:48:25.602196      40 log.go:172] (0xc00016c0b0) (0xc0002e0000) Create stream\nI0126 10:48:25.602214      40 log.go:172] (0xc00016c0b0) (0xc0002e0000) Stream added, broadcasting: 3\nI0126 10:48:25.603488      40 log.go:172] (0xc00016c0b0) Reply frame received for 3\nI0126 10:48:25.603521      40 log.go:172] (0xc00016c0b0) (0xc0002570e0) Create stream\nI0126 10:48:25.603535      40 log.go:172] (0xc00016c0b0) (0xc0002570e0) Stream added, broadcasting: 5\nI0126 10:48:25.604632      40 log.go:172] (0xc00016c0b0) Reply frame received for 5\nI0126 10:48:25.964681      40 log.go:172] (0xc00016c0b0) Data frame received for 3\nI0126 10:48:25.964742      40 log.go:172] (0xc0002e0000) (3) Data frame handling\nI0126 10:48:25.964754      40 log.go:172] (0xc0002e0000) (3) Data frame sent\nI0126 10:48:26.114461      40 log.go:172] (0xc00016c0b0) (0xc0002e0000) Stream removed, broadcasting: 3\nI0126 10:48:26.114696      40 log.go:172] (0xc00016c0b0) Data frame received for 1\nI0126 10:48:26.114723      40 log.go:172] (0xc000257040) (1) Data frame handling\nI0126 10:48:26.114749      40 log.go:172] (0xc000257040) (1) Data frame sent\nI0126 10:48:26.114768      40 log.go:172] (0xc00016c0b0) (0xc000257040) Stream removed, broadcasting: 1\nI0126 10:48:26.114835      40 log.go:172] (0xc00016c0b0) (0xc0002570e0) Stream removed, broadcasting: 5\nI0126 10:48:26.114893      40 log.go:172] (0xc00016c0b0) Go away received\nI0126 10:48:26.115139      40 log.go:172] (0xc00016c0b0) (0xc000257040) Stream removed, broadcasting: 1\nI0126 10:48:26.115159      40 log.go:172] (0xc00016c0b0) (0xc0002e0000) Stream removed, broadcasting: 3\nI0126 10:48:26.115174      40 log.go:172] (0xc00016c0b0) (0xc0002570e0) Stream removed, broadcasting: 5\n"
Jan 26 10:48:26.125: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 10:48:26.125: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 26 10:48:36.275: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 26 10:48:46.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6w7cz ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 10:48:46.955: INFO: stderr: "I0126 10:48:46.668411      62 log.go:172] (0xc00015c580) (0xc000631400) Create stream\nI0126 10:48:46.668524      62 log.go:172] (0xc00015c580) (0xc000631400) Stream added, broadcasting: 1\nI0126 10:48:46.673420      62 log.go:172] (0xc00015c580) Reply frame received for 1\nI0126 10:48:46.673486      62 log.go:172] (0xc00015c580) (0xc0006314a0) Create stream\nI0126 10:48:46.673522      62 log.go:172] (0xc00015c580) (0xc0006314a0) Stream added, broadcasting: 3\nI0126 10:48:46.674717      62 log.go:172] (0xc00015c580) Reply frame received for 3\nI0126 10:48:46.674865      62 log.go:172] (0xc00015c580) (0xc0004f4000) Create stream\nI0126 10:48:46.674883      62 log.go:172] (0xc00015c580) (0xc0004f4000) Stream added, broadcasting: 5\nI0126 10:48:46.680003      62 log.go:172] (0xc00015c580) Reply frame received for 5\nI0126 10:48:46.817323      62 log.go:172] (0xc00015c580) Data frame received for 3\nI0126 10:48:46.817410      62 log.go:172] (0xc0006314a0) (3) Data frame handling\nI0126 10:48:46.817454      62 log.go:172] (0xc0006314a0) (3) Data frame sent\nI0126 10:48:46.945432      62 log.go:172] (0xc00015c580) Data frame received for 1\nI0126 10:48:46.945663      62 log.go:172] (0xc000631400) (1) Data frame handling\nI0126 10:48:46.945732      62 log.go:172] (0xc000631400) (1) Data frame sent\nI0126 10:48:46.945809      62 log.go:172] (0xc00015c580) (0xc000631400) Stream removed, broadcasting: 1\nI0126 10:48:46.946523      62 log.go:172] (0xc00015c580) (0xc0004f4000) Stream removed, broadcasting: 5\nI0126 10:48:46.946819      62 log.go:172] (0xc00015c580) (0xc0006314a0) Stream removed, broadcasting: 3\nI0126 10:48:46.947037      62 log.go:172] (0xc00015c580) Go away received\nI0126 10:48:46.947173      62 log.go:172] (0xc00015c580) (0xc000631400) Stream removed, broadcasting: 1\nI0126 10:48:46.947274      62 log.go:172] (0xc00015c580) (0xc0006314a0) Stream removed, broadcasting: 3\nI0126 10:48:46.947318      62 log.go:172] (0xc00015c580) (0xc0004f4000) Stream removed, broadcasting: 5\n"
Jan 26 10:48:46.955: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 10:48:46.955: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 10:48:57.098: INFO: Waiting for StatefulSet e2e-tests-statefulset-6w7cz/ss2 to complete update
Jan 26 10:48:57.098: INFO: Waiting for Pod e2e-tests-statefulset-6w7cz/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 26 10:48:57.098: INFO: Waiting for Pod e2e-tests-statefulset-6w7cz/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 26 10:49:07.123: INFO: Waiting for StatefulSet e2e-tests-statefulset-6w7cz/ss2 to complete update
Jan 26 10:49:07.123: INFO: Waiting for Pod e2e-tests-statefulset-6w7cz/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 26 10:49:07.123: INFO: Waiting for Pod e2e-tests-statefulset-6w7cz/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 26 10:49:17.124: INFO: Waiting for StatefulSet e2e-tests-statefulset-6w7cz/ss2 to complete update
Jan 26 10:49:17.124: INFO: Waiting for Pod e2e-tests-statefulset-6w7cz/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 26 10:49:27.241: INFO: Waiting for StatefulSet e2e-tests-statefulset-6w7cz/ss2 to complete update
Jan 26 10:49:37.499: INFO: Waiting for StatefulSet e2e-tests-statefulset-6w7cz/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 26 10:49:47.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6w7cz ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 10:49:47.656: INFO: stderr: "I0126 10:49:47.300530      84 log.go:172] (0xc000138630) (0xc0007425a0) Create stream\nI0126 10:49:47.300826      84 log.go:172] (0xc000138630) (0xc0007425a0) Stream added, broadcasting: 1\nI0126 10:49:47.306406      84 log.go:172] (0xc000138630) Reply frame received for 1\nI0126 10:49:47.306456      84 log.go:172] (0xc000138630) (0xc00063edc0) Create stream\nI0126 10:49:47.306474      84 log.go:172] (0xc000138630) (0xc00063edc0) Stream added, broadcasting: 3\nI0126 10:49:47.307801      84 log.go:172] (0xc000138630) Reply frame received for 3\nI0126 10:49:47.307832      84 log.go:172] (0xc000138630) (0xc0008900a0) Create stream\nI0126 10:49:47.307843      84 log.go:172] (0xc000138630) (0xc0008900a0) Stream added, broadcasting: 5\nI0126 10:49:47.309217      84 log.go:172] (0xc000138630) Reply frame received for 5\nI0126 10:49:47.506658      84 log.go:172] (0xc000138630) Data frame received for 3\nI0126 10:49:47.506709      84 log.go:172] (0xc00063edc0) (3) Data frame handling\nI0126 10:49:47.506732      84 log.go:172] (0xc00063edc0) (3) Data frame sent\nI0126 10:49:47.646412      84 log.go:172] (0xc000138630) Data frame received for 1\nI0126 10:49:47.646634      84 log.go:172] (0xc000138630) (0xc00063edc0) Stream removed, broadcasting: 3\nI0126 10:49:47.646733      84 log.go:172] (0xc0007425a0) (1) Data frame handling\nI0126 10:49:47.646757      84 log.go:172] (0xc0007425a0) (1) Data frame sent\nI0126 10:49:47.646839      84 log.go:172] (0xc000138630) (0xc0007425a0) Stream removed, broadcasting: 1\nI0126 10:49:47.647019      84 log.go:172] (0xc000138630) (0xc0008900a0) Stream removed, broadcasting: 5\nI0126 10:49:47.647082      84 log.go:172] (0xc000138630) Go away received\nI0126 10:49:47.647216      84 log.go:172] (0xc000138630) (0xc0007425a0) Stream removed, broadcasting: 1\nI0126 10:49:47.647233      84 log.go:172] (0xc000138630) (0xc00063edc0) Stream removed, broadcasting: 3\nI0126 10:49:47.647245      84 log.go:172] (0xc000138630) (0xc0008900a0) Stream removed, broadcasting: 5\n"
Jan 26 10:49:47.657: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 10:49:47.657: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 10:49:57.764: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 26 10:50:07.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6w7cz ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 10:50:08.385: INFO: stderr: "I0126 10:50:08.011700     107 log.go:172] (0xc0006624d0) (0xc000576820) Create stream\nI0126 10:50:08.011822     107 log.go:172] (0xc0006624d0) (0xc000576820) Stream added, broadcasting: 1\nI0126 10:50:08.029197     107 log.go:172] (0xc0006624d0) Reply frame received for 1\nI0126 10:50:08.029279     107 log.go:172] (0xc0006624d0) (0xc0005a2000) Create stream\nI0126 10:50:08.029300     107 log.go:172] (0xc0006624d0) (0xc0005a2000) Stream added, broadcasting: 3\nI0126 10:50:08.030639     107 log.go:172] (0xc0006624d0) Reply frame received for 3\nI0126 10:50:08.030707     107 log.go:172] (0xc0006624d0) (0xc0002aabe0) Create stream\nI0126 10:50:08.030723     107 log.go:172] (0xc0006624d0) (0xc0002aabe0) Stream added, broadcasting: 5\nI0126 10:50:08.031937     107 log.go:172] (0xc0006624d0) Reply frame received for 5\nI0126 10:50:08.163796     107 log.go:172] (0xc0006624d0) Data frame received for 3\nI0126 10:50:08.163990     107 log.go:172] (0xc0005a2000) (3) Data frame handling\nI0126 10:50:08.164095     107 log.go:172] (0xc0005a2000) (3) Data frame sent\nI0126 10:50:08.367410     107 log.go:172] (0xc0006624d0) (0xc0005a2000) Stream removed, broadcasting: 3\nI0126 10:50:08.367910     107 log.go:172] (0xc0006624d0) Data frame received for 1\nI0126 10:50:08.367985     107 log.go:172] (0xc000576820) (1) Data frame handling\nI0126 10:50:08.368049     107 log.go:172] (0xc000576820) (1) Data frame sent\nI0126 10:50:08.368156     107 log.go:172] (0xc0006624d0) (0xc000576820) Stream removed, broadcasting: 1\nI0126 10:50:08.368214     107 log.go:172] (0xc0006624d0) (0xc0002aabe0) Stream removed, broadcasting: 5\nI0126 10:50:08.368342     107 log.go:172] (0xc0006624d0) Go away received\nI0126 10:50:08.369870     107 log.go:172] (0xc0006624d0) (0xc000576820) Stream removed, broadcasting: 1\nI0126 10:50:08.369898     107 log.go:172] (0xc0006624d0) (0xc0005a2000) Stream removed, broadcasting: 3\nI0126 10:50:08.369908     107 log.go:172] (0xc0006624d0) (0xc0002aabe0) Stream removed, broadcasting: 5\n"
Jan 26 10:50:08.385: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 10:50:08.385: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 10:50:08.657: INFO: Waiting for StatefulSet e2e-tests-statefulset-6w7cz/ss2 to complete update
Jan 26 10:50:08.657: INFO: Waiting for Pod e2e-tests-statefulset-6w7cz/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 26 10:50:08.657: INFO: Waiting for Pod e2e-tests-statefulset-6w7cz/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 26 10:50:08.657: INFO: Waiting for Pod e2e-tests-statefulset-6w7cz/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 26 10:50:18.682: INFO: Waiting for StatefulSet e2e-tests-statefulset-6w7cz/ss2 to complete update
Jan 26 10:50:18.682: INFO: Waiting for Pod e2e-tests-statefulset-6w7cz/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 26 10:50:18.682: INFO: Waiting for Pod e2e-tests-statefulset-6w7cz/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 26 10:50:28.676: INFO: Waiting for StatefulSet e2e-tests-statefulset-6w7cz/ss2 to complete update
Jan 26 10:50:28.676: INFO: Waiting for Pod e2e-tests-statefulset-6w7cz/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 26 10:50:28.676: INFO: Waiting for Pod e2e-tests-statefulset-6w7cz/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 26 10:50:38.768: INFO: Waiting for StatefulSet e2e-tests-statefulset-6w7cz/ss2 to complete update
Jan 26 10:50:38.768: INFO: Waiting for Pod e2e-tests-statefulset-6w7cz/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 26 10:50:49.871: INFO: Waiting for StatefulSet e2e-tests-statefulset-6w7cz/ss2 to complete update
Jan 26 10:50:58.668: INFO: Waiting for StatefulSet e2e-tests-statefulset-6w7cz/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 26 10:51:08.675: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6w7cz
Jan 26 10:51:08.682: INFO: Scaling statefulset ss2 to 0
Jan 26 10:51:38.724: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 10:51:38.733: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:51:38.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-6w7cz" for this suite.
Jan 26 10:51:46.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:51:46.956: INFO: namespace: e2e-tests-statefulset-6w7cz, resource: bindings, ignored listing per whitelist
Jan 26 10:51:46.981: INFO: namespace e2e-tests-statefulset-6w7cz deletion completed in 8.211528505s

• [SLOW TEST:241.851 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
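The StatefulSet test above creates ss2 with three replicas, knocks ss2-1 out of readiness by moving index.html aside, updates the pod template image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine, watches the rolling update progress in reverse ordinal order, and then rolls the template back. A minimal command-line sketch of the same update and rollback, assuming the container in the ss2 template is named nginx (the suite itself drives this through the e2e framework, not kubectl):

  # Trigger the rolling update the test performs (highest ordinal is updated first):
  kubectl -n e2e-tests-statefulset-6w7cz set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
  kubectl -n e2e-tests-statefulset-6w7cz rollout status statefulset/ss2

  # Roll back the same way the test does, by restoring the previous template image:
  kubectl -n e2e-tests-statefulset-6w7cz set image statefulset/ss2 nginx=docker.io/library/nginx:1.14-alpine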
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:51:46.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 26 10:51:47.101: INFO: Waiting up to 5m0s for pod "pod-d940841a-4029-11ea-b664-0242ac110005" in namespace "e2e-tests-emptydir-sjzlr" to be "success or failure"
Jan 26 10:51:47.169: INFO: Pod "pod-d940841a-4029-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 67.31856ms
Jan 26 10:51:49.260: INFO: Pod "pod-d940841a-4029-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158466875s
Jan 26 10:51:51.309: INFO: Pod "pod-d940841a-4029-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207506047s
Jan 26 10:51:53.319: INFO: Pod "pod-d940841a-4029-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.217652897s
Jan 26 10:51:55.440: INFO: Pod "pod-d940841a-4029-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.338751873s
Jan 26 10:51:57.672: INFO: Pod "pod-d940841a-4029-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.570695544s
STEP: Saw pod success
Jan 26 10:51:57.672: INFO: Pod "pod-d940841a-4029-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 10:51:57.685: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d940841a-4029-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 10:51:57.893: INFO: Waiting for pod pod-d940841a-4029-11ea-b664-0242ac110005 to disappear
Jan 26 10:51:58.033: INFO: Pod pod-d940841a-4029-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:51:58.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sjzlr" for this suite.
Jan 26 10:52:04.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:52:04.148: INFO: namespace: e2e-tests-emptydir-sjzlr, resource: bindings, ignored listing per whitelist
Jan 26 10:52:04.200: INFO: namespace e2e-tests-emptydir-sjzlr deletion completed in 6.158748655s

• [SLOW TEST:17.219 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
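The EmptyDir test above runs a short-lived pod that mounts a tmpfs-backed emptyDir, creates a file with mode 0666 as root, and checks the permissions reported in the container output before the pod completes. A minimal manifest in the same spirit (image, file name, and mount path are illustrative, not the exact pod the suite creates):

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0666-tmpfs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory    # medium: Memory makes the emptyDir tmpfs-backed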
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:52:04.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-pzxjq
Jan 26 10:52:14.412: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-pzxjq
STEP: checking the pod's current state and verifying that restartCount is present
Jan 26 10:52:14.416: INFO: Initial restart count of pod liveness-http is 0
Jan 26 10:52:32.736: INFO: Restart count of pod e2e-tests-container-probe-pzxjq/liveness-http is now 1 (18.319126705s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:52:32.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-pzxjq" for this suite.
Jan 26 10:52:39.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:52:39.954: INFO: namespace: e2e-tests-container-probe-pzxjq, resource: bindings, ignored listing per whitelist
Jan 26 10:52:39.954: INFO: namespace e2e-tests-container-probe-pzxjq deletion completed in 7.122986174s

• [SLOW TEST:35.754 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
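The probe test above starts pod liveness-http, whose container serves /healthz and begins failing the check shortly after startup, so the kubelet kills and restarts the container; the log shows restartCount moving from 0 to 1 about 18 seconds in. A sketch of the same probe shape, using the liveness sample image from the Kubernetes documentation rather than whatever image the v1.13 e2e suite uses internally:

  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-http
  spec:
    containers:
    - name: liveness
      image: k8s.gcr.io/liveness
      args: ["/server"]        # sample server that starts failing /healthz after a delay
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 3
        periodSeconds: 3
        failureThreshold: 1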
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:52:39.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 10:52:40.328: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 26 10:52:45.746: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 26 10:52:49.772: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 26 10:52:51.797: INFO: Creating deployment "test-rollover-deployment"
Jan 26 10:52:51.853: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 26 10:52:53.971: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 26 10:52:53.982: INFO: Ensure that both replica sets have 1 created replica
Jan 26 10:52:53.988: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 26 10:52:54.000: INFO: Updating deployment test-rollover-deployment
Jan 26 10:52:54.000: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 26 10:52:57.374: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 26 10:52:58.312: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 26 10:52:58.473: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 10:52:58.474: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632774, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632771, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:00.517: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 10:53:00.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632774, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632771, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:05.074: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 10:53:05.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632774, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632771, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:06.504: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 10:53:06.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632774, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632771, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:08.501: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 10:53:08.501: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632787, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632771, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:10.523: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 10:53:10.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632787, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632771, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:12.504: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 10:53:12.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632787, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632771, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:14.504: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 10:53:14.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632787, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632771, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:16.500: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 10:53:16.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632772, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632787, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632771, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:18.509: INFO: 
Jan 26 10:53:18.509: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 26 10:53:18.526: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-6d2wn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6d2wn/deployments/test-rollover-deployment,UID:ffd29688-4029-11ea-a994-fa163e34d433,ResourceVersion:19510502,Generation:2,CreationTimestamp:2020-01-26 10:52:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-26 10:52:52 +0000 UTC 2020-01-26 10:52:52 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-26 10:53:17 +0000 UTC 2020-01-26 10:52:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 26 10:53:18.531: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-6d2wn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6d2wn/replicasets/test-rollover-deployment-5b8479fdb6,UID:0122d054-402a-11ea-a994-fa163e34d433,ResourceVersion:19510491,Generation:2,CreationTimestamp:2020-01-26 10:52:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ffd29688-4029-11ea-a994-fa163e34d433 0xc001db44d7 0xc001db44d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 26 10:53:18.531: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 26 10:53:18.531: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-6d2wn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6d2wn/replicasets/test-rollover-controller,UID:f8e1881b-4029-11ea-a994-fa163e34d433,ResourceVersion:19510501,Generation:2,CreationTimestamp:2020-01-26 10:52:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ffd29688-4029-11ea-a994-fa163e34d433 0xc001db4347 0xc001db4348}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 26 10:53:18.531: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-6d2wn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6d2wn/replicasets/test-rollover-deployment-58494b7559,UID:ffe19540-4029-11ea-a994-fa163e34d433,ResourceVersion:19510460,Generation:2,CreationTimestamp:2020-01-26 10:52:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ffd29688-4029-11ea-a994-fa163e34d433 0xc001db4407 0xc001db4408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 26 10:53:18.542: INFO: Pod "test-rollover-deployment-5b8479fdb6-2mp94" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-2mp94,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-6d2wn,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6d2wn/pods/test-rollover-deployment-5b8479fdb6-2mp94,UID:01485714-402a-11ea-a994-fa163e34d433,ResourceVersion:19510477,Generation:0,CreationTimestamp:2020-01-26 10:52:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 0122d054-402a-11ea-a994-fa163e34d433 0xc001db5087 0xc001db5088}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qfxbk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qfxbk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-qfxbk true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001db50f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001db5110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 10:52:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 10:53:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 10:53:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 10:52:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-26 10:52:54 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-26 10:53:06 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://dcb9d33eb10119dad7b70c957bf9407bc2d80a9dd8b4d3a03a8576d5836543da}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:53:18.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-6d2wn" for this suite.
Jan 26 10:53:26.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:53:26.974: INFO: namespace: e2e-tests-deployment-6d2wn, resource: bindings, ignored listing per whitelist
Jan 26 10:53:27.088: INFO: namespace e2e-tests-deployment-6d2wn deletion completed in 8.53896836s

• [SLOW TEST:47.134 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
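For context, the rollover test above drives a Deployment whose pod template is replaced mid-rollout while MinReadySeconds keeps the intermediate ReplicaSet from ever becoming available. The following is a minimal Go sketch of a comparable Deployment object, built from the same k8s.io/api types these dumps are printed from; the names, image, and rolling-update parameters are illustrative, not the exact objects the suite created.

// Hedged sketch of a rollover-style Deployment (illustrative names/values).
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func rolloverDeployment() *appsv1.Deployment {
	labels := map[string]string{"name": "rollover-pod"}
	maxSurge := intstr.FromInt(1)
	maxUnavailable := intstr.FromInt(0)
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// MinReadySeconds delays marking new pods "available", which is what
			// allows the template to be swapped again before the first new
			// ReplicaSet finishes rolling out (the "rollover").
			MinReadySeconds: 10,
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
}

func main() { fmt.Println(rolloverDeployment().Name) }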
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:53:27.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 10:53:27.385: INFO: Creating deployment "test-recreate-deployment"
Jan 26 10:53:27.474: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan 26 10:53:27.485: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan 26 10:53:30.285: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan 26 10:53:30.314: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632809, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:32.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632809, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:34.323: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632809, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:36.322: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632809, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:39.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632809, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:40.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632809, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715632807, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 10:53:42.331: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 26 10:53:42.347: INFO: Updating deployment test-recreate-deployment
Jan 26 10:53:42.347: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 26 10:53:43.064: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-jk9gf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jk9gf/deployments/test-recreate-deployment,UID:15097b05-402a-11ea-a994-fa163e34d433,ResourceVersion:19510602,Generation:2,CreationTimestamp:2020-01-26 10:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-26 10:53:42 +0000 UTC 2020-01-26 10:53:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-26 10:53:43 +0000 UTC 2020-01-26 10:53:27 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 26 10:53:43.073: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-jk9gf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jk9gf/replicasets/test-recreate-deployment-589c4bfd,UID:1e312b8d-402a-11ea-a994-fa163e34d433,ResourceVersion:19510600,Generation:1,CreationTimestamp:2020-01-26 10:53:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 15097b05-402a-11ea-a994-fa163e34d433 0xc0012697df 0xc001269810}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 26 10:53:43.073: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 26 10:53:43.073: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-jk9gf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jk9gf/replicasets/test-recreate-deployment-5bf7f65dc,UID:1516bfcc-402a-11ea-a994-fa163e34d433,ResourceVersion:19510590,Generation:2,CreationTimestamp:2020-01-26 10:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 15097b05-402a-11ea-a994-fa163e34d433 0xc001269980 0xc001269981}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 26 10:53:43.079: INFO: Pod "test-recreate-deployment-589c4bfd-tnbrt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-tnbrt,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-jk9gf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-jk9gf/pods/test-recreate-deployment-589c4bfd-tnbrt,UID:1e34ba08-402a-11ea-a994-fa163e34d433,ResourceVersion:19510601,Generation:0,CreationTimestamp:2020-01-26 10:53:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 1e312b8d-402a-11ea-a994-fa163e34d433 0xc0011da90f 0xc0011da940}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zs924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zs924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zs924 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011daa00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011daa20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 10:53:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 10:53:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 10:53:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 10:53:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-26 10:53:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:53:43.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-jk9gf" for this suite.
Jan 26 10:53:53.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:53:53.224: INFO: namespace: e2e-tests-deployment-jk9gf, resource: bindings, ignored listing per whitelist
Jan 26 10:53:53.270: INFO: namespace e2e-tests-deployment-jk9gf deletion completed in 10.186059473s

• [SLOW TEST:26.181 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
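The defining property of the test above is the Recreate strategy: the controller scales the old ReplicaSet to zero and deletes its pods before any pod from the new template is created, so old and new pods never overlap. A minimal sketch of such a Deployment follows; the name, labels, and image are illustrative rather than the suite's exact objects.

// Hedged sketch of a Deployment using the Recreate strategy.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func recreateDeployment() *appsv1.Deployment {
	labels := map[string]string{"name": "sample-pod-3"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate (instead of the default RollingUpdate) is the property the
			// test verifies: old pods are deleted before new ones appear.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}

func main() { fmt.Println(recreateDeployment().Spec.Strategy.Type) }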
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:53:53.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-2496c686-402a-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 10:53:53.511: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2498a34f-402a-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-sqndl" to be "success or failure"
Jan 26 10:53:53.531: INFO: Pod "pod-projected-configmaps-2498a34f-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.977031ms
Jan 26 10:53:55.558: INFO: Pod "pod-projected-configmaps-2498a34f-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046715906s
Jan 26 10:53:57.570: INFO: Pod "pod-projected-configmaps-2498a34f-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059387284s
Jan 26 10:53:59.625: INFO: Pod "pod-projected-configmaps-2498a34f-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113670486s
Jan 26 10:54:02.295: INFO: Pod "pod-projected-configmaps-2498a34f-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.783837263s
Jan 26 10:54:04.309: INFO: Pod "pod-projected-configmaps-2498a34f-402a-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.798480592s
STEP: Saw pod success
Jan 26 10:54:04.310: INFO: Pod "pod-projected-configmaps-2498a34f-402a-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 10:54:04.322: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-2498a34f-402a-11ea-b664-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 26 10:54:05.140: INFO: Waiting for pod pod-projected-configmaps-2498a34f-402a-11ea-b664-0242ac110005 to disappear
Jan 26 10:54:05.150: INFO: Pod pod-projected-configmaps-2498a34f-402a-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:54:05.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sqndl" for this suite.
Jan 26 10:54:11.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:54:11.335: INFO: namespace: e2e-tests-projected-sqndl, resource: bindings, ignored listing per whitelist
Jan 26 10:54:11.382: INFO: namespace e2e-tests-projected-sqndl deletion completed in 6.21440295s

• [SLOW TEST:18.111 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
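The test above consumes a ConfigMap through a projected volume with a key-to-path mapping while the pod runs as a non-root user. A minimal sketch of a pod of that shape follows; the ConfigMap name, key, paths, image, and UID are illustrative assumptions, not the suite's exact values.

// Hedged sketch: projected ConfigMap volume with an item mapping, non-root pod.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func projectedConfigMapPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			// RunAsUser makes the consuming container non-root, matching the
			// "as non-root" variant of the test.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// Map key "data-1" to a different file name inside the volume.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				// Print the mapped file so its contents can be asserted on.
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() { fmt.Println(projectedConfigMapPod().Name) }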
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:54:11.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-2f6886c0-402a-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 10:54:11.710: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2f6f60ce-402a-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-gsc2v" to be "success or failure"
Jan 26 10:54:11.776: INFO: Pod "pod-projected-secrets-2f6f60ce-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 66.469202ms
Jan 26 10:54:13.973: INFO: Pod "pod-projected-secrets-2f6f60ce-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263038707s
Jan 26 10:54:16.001: INFO: Pod "pod-projected-secrets-2f6f60ce-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291403835s
Jan 26 10:54:18.013: INFO: Pod "pod-projected-secrets-2f6f60ce-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.303253738s
Jan 26 10:54:20.404: INFO: Pod "pod-projected-secrets-2f6f60ce-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.694791614s
Jan 26 10:54:22.421: INFO: Pod "pod-projected-secrets-2f6f60ce-402a-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.711561555s
STEP: Saw pod success
Jan 26 10:54:22.421: INFO: Pod "pod-projected-secrets-2f6f60ce-402a-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 10:54:22.432: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2f6f60ce-402a-11ea-b664-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 26 10:54:22.710: INFO: Waiting for pod pod-projected-secrets-2f6f60ce-402a-11ea-b664-0242ac110005 to disappear
Jan 26 10:54:22.807: INFO: Pod pod-projected-secrets-2f6f60ce-402a-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:54:22.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gsc2v" for this suite.
Jan 26 10:54:28.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:54:28.874: INFO: namespace: e2e-tests-projected-gsc2v, resource: bindings, ignored listing per whitelist
Jan 26 10:54:28.988: INFO: namespace e2e-tests-projected-gsc2v deletion completed in 6.17699847s

• [SLOW TEST:17.605 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
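The "Item Mode set" variant above projects a Secret where an individual item carries its own file mode, overriding the volume-wide default. A minimal sketch follows; the Secret name, key, paths, modes, and image are illustrative.

// Hedged sketch: projected Secret volume with a per-item file mode.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func projectedSecretPod() *corev1.Pod {
	itemMode := int32(0400) // per-item mode overrides the volume's DefaultMode
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: int32Ptr(0644),
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
								Items: []corev1.KeyToPath{{
									Key:  "data-1",
									Path: "new-path-data-1",
									Mode: &itemMode,
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"ls", "-l", "/etc/projected-secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() { fmt.Println(projectedSecretPod().Name) }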
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:54:28.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-39d0d5b4-402a-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 10:54:29.191: INFO: Waiting up to 5m0s for pod "pod-configmaps-39d22657-402a-11ea-b664-0242ac110005" in namespace "e2e-tests-configmap-s8dn5" to be "success or failure"
Jan 26 10:54:29.201: INFO: Pod "pod-configmaps-39d22657-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.044763ms
Jan 26 10:54:32.352: INFO: Pod "pod-configmaps-39d22657-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.160801568s
Jan 26 10:54:34.376: INFO: Pod "pod-configmaps-39d22657-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.184427963s
Jan 26 10:54:36.597: INFO: Pod "pod-configmaps-39d22657-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.405431705s
Jan 26 10:54:38.620: INFO: Pod "pod-configmaps-39d22657-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.429072863s
Jan 26 10:54:40.654: INFO: Pod "pod-configmaps-39d22657-402a-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.463065619s
STEP: Saw pod success
Jan 26 10:54:40.654: INFO: Pod "pod-configmaps-39d22657-402a-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 10:54:40.658: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-39d22657-402a-11ea-b664-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 26 10:54:40.805: INFO: Waiting for pod pod-configmaps-39d22657-402a-11ea-b664-0242ac110005 to disappear
Jan 26 10:54:40.850: INFO: Pod pod-configmaps-39d22657-402a-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:54:40.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-s8dn5" for this suite.
Jan 26 10:54:46.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:54:47.009: INFO: namespace: e2e-tests-configmap-s8dn5, resource: bindings, ignored listing per whitelist
Jan 26 10:54:47.054: INFO: namespace e2e-tests-configmap-s8dn5 deletion completed in 6.19449505s

• [SLOW TEST:18.067 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
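The defaultMode case above hinges on one field of the ConfigMap volume source: DefaultMode sets the permissions of every file rendered from the ConfigMap's keys unless an item overrides it. A small sketch of just that volume follows, with an illustrative ConfigMap name and mode.

// Hedged sketch: ConfigMap volume with DefaultMode set.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func configMapVolume() corev1.Volume {
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
				// DefaultMode applies to every file created from the ConfigMap's keys.
				DefaultMode: int32Ptr(0400),
			},
		},
	}
}

func main() { fmt.Println(configMapVolume().Name) }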
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:54:47.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 26 10:55:01.146: INFO: Successfully updated pod "annotationupdate45caaf0c-402a-11ea-b664-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:55:03.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5s89z" for this suite.
Jan 26 10:55:27.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:55:27.435: INFO: namespace: e2e-tests-downward-api-5s89z, resource: bindings, ignored listing per whitelist
Jan 26 10:55:27.549: INFO: namespace e2e-tests-downward-api-5s89z deletion completed in 24.240859129s

• [SLOW TEST:40.495 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
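The annotation-update test above relies on a downward API volume file backed by metadata.annotations: when the pod's annotations are patched, the kubelet rewrites the projected file, which is what the test waits to observe. A minimal sketch of that volume follows; the volume and path names are illustrative.

// Hedged sketch: downward API volume file exposing the pod's annotations.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func downwardAnnotationsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "annotations",
					FieldRef: &corev1.ObjectFieldSelector{
						APIVersion: "v1",
						FieldPath:  "metadata.annotations",
					},
				}},
			},
		},
	}
}

func main() { fmt.Println(downwardAnnotationsVolume().Name) }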
SSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:55:27.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 26 10:55:38.337: INFO: Successfully updated pod "pod-update-5cc02107-402a-11ea-b664-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan 26 10:55:38.354: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:55:38.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-hsxxj" for this suite.
Jan 26 10:56:18.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:56:18.596: INFO: namespace: e2e-tests-pods-hsxxj, resource: bindings, ignored listing per whitelist
Jan 26 10:56:18.641: INFO: namespace e2e-tests-pods-hsxxj deletion completed in 40.281404615s

• [SLOW TEST:51.092 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:56:18.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-9g6n8
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-9g6n8
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-9g6n8
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-9g6n8
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-9g6n8
Jan 26 10:56:35.030: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-9g6n8, name: ss-0, uid: 838821db-402a-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan 26 10:56:42.471: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-9g6n8, name: ss-0, uid: 838821db-402a-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 26 10:56:42.579: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-9g6n8, name: ss-0, uid: 838821db-402a-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 26 10:56:42.607: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-9g6n8
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-9g6n8
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-9g6n8 and is in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 26 10:56:55.923: INFO: Deleting all statefulset in ns e2e-tests-statefulset-9g6n8
Jan 26 10:56:55.930: INFO: Scaling statefulset ss to 0
Jan 26 10:57:05.996: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 10:57:06.003: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:57:06.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-9g6n8" for this suite.
Jan 26 10:57:14.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:57:14.880: INFO: namespace: e2e-tests-statefulset-9g6n8, resource: bindings, ignored listing per whitelist
Jan 26 10:57:14.987: INFO: namespace e2e-tests-statefulset-9g6n8 deletion completed in 8.912886987s

• [SLOW TEST:56.345 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
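In the eviction test above, the stateful pod is scheduled onto a node that already runs a pod holding the same hostPort, so ss-0 fails; once the blocking pod is removed, the StatefulSet controller recreates ss-0 and the test sees it reach Running. The following is a minimal sketch of a one-replica StatefulSet of that general shape; the name, labels, image, and port are illustrative assumptions, not the suite's exact spec.

// Hedged sketch: one-replica StatefulSet with a hostPort on its container.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func statefulSet() *appsv1.StatefulSet {
	labels := map[string]string{"app": "ss-example"}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// ServiceName points at the headless Service created in the
			// "Creating service test in namespace ..." step above.
			ServiceName: "test",
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
						Ports: []corev1.ContainerPort{{
							ContainerPort: 80,
							// The hostPort is what collides with the pre-created pod on
							// the same node, driving ss-0 into the Failed phase first.
							HostPort: 21017,
						}},
					}},
				},
			},
		},
	}
}

func main() { fmt.Println(statefulSet().Name) }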
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:57:14.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 26 10:57:15.178: INFO: Waiting up to 5m0s for pod "pod-9ccbace3-402a-11ea-b664-0242ac110005" in namespace "e2e-tests-emptydir-55ftc" to be "success or failure"
Jan 26 10:57:15.206: INFO: Pod "pod-9ccbace3-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.839957ms
Jan 26 10:57:17.286: INFO: Pod "pod-9ccbace3-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108116383s
Jan 26 10:57:19.298: INFO: Pod "pod-9ccbace3-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119801864s
Jan 26 10:57:21.453: INFO: Pod "pod-9ccbace3-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.27509259s
Jan 26 10:57:23.492: INFO: Pod "pod-9ccbace3-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313447092s
Jan 26 10:57:25.503: INFO: Pod "pod-9ccbace3-402a-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.324849924s
STEP: Saw pod success
Jan 26 10:57:25.503: INFO: Pod "pod-9ccbace3-402a-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 10:57:25.511: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9ccbace3-402a-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 10:57:28.467: INFO: Waiting for pod pod-9ccbace3-402a-11ea-b664-0242ac110005 to disappear
Jan 26 10:57:28.555: INFO: Pod pod-9ccbace3-402a-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:57:28.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-55ftc" for this suite.
Jan 26 10:57:34.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:57:34.684: INFO: namespace: e2e-tests-emptydir-55ftc, resource: bindings, ignored listing per whitelist
Jan 26 10:57:34.722: INFO: namespace e2e-tests-emptydir-55ftc deletion completed in 6.155612482s

• [SLOW TEST:19.735 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
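The "(root,0666,default)" case above uses an emptyDir on the node's default storage medium; the test container then creates a file with 0666 permissions inside the mount and verifies them. A small sketch of just that volume follows, with an illustrative volume name.

// Hedged sketch: emptyDir volume on the default storage medium.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func emptyDirVolume() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			// StorageMediumDefault ("") backs the volume with node-local disk;
			// StorageMediumMemory would use a tmpfs instead.
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
		},
	}
}

func main() { fmt.Println(emptyDirVolume().Name) }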
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:57:34.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 10:57:34.945: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a894a798-402a-11ea-b664-0242ac110005" in namespace "e2e-tests-downward-api-tk254" to be "success or failure"
Jan 26 10:57:34.970: INFO: Pod "downwardapi-volume-a894a798-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.848559ms
Jan 26 10:57:37.487: INFO: Pod "downwardapi-volume-a894a798-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.542390726s
Jan 26 10:57:39.494: INFO: Pod "downwardapi-volume-a894a798-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.549178455s
Jan 26 10:57:41.502: INFO: Pod "downwardapi-volume-a894a798-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.557534987s
Jan 26 10:57:43.611: INFO: Pod "downwardapi-volume-a894a798-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.666076324s
Jan 26 10:57:45.624: INFO: Pod "downwardapi-volume-a894a798-402a-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.679456592s
STEP: Saw pod success
Jan 26 10:57:45.624: INFO: Pod "downwardapi-volume-a894a798-402a-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 10:57:45.802: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a894a798-402a-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 10:57:45.876: INFO: Waiting for pod downwardapi-volume-a894a798-402a-11ea-b664-0242ac110005 to disappear
Jan 26 10:57:46.018: INFO: Pod downwardapi-volume-a894a798-402a-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:57:46.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tk254" for this suite.
Jan 26 10:57:52.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:57:52.080: INFO: namespace: e2e-tests-downward-api-tk254, resource: bindings, ignored listing per whitelist
Jan 26 10:57:52.163: INFO: namespace e2e-tests-downward-api-tk254 deletion completed in 6.135151632s

• [SLOW TEST:17.441 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
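The memory-limit test above projects limits.memory through the downward API for a container that declares no memory limit, so the kubelet falls back to the node's allocatable memory when rendering the file. A minimal sketch of that volume item follows; the container, volume, path names, and divisor are illustrative.

// Hedged sketch: downward API file backed by a resourceFieldRef on limits.memory.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func memoryLimitVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
						// Divisor scales the reported value; 1Mi reports mebibytes.
						Divisor: resource.MustParse("1Mi"),
					},
				}},
			},
		},
	}
}

func main() { fmt.Println(memoryLimitVolume().Name) }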
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:57:52.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b2ea774d-402a-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 10:57:52.334: INFO: Waiting up to 5m0s for pod "pod-secrets-b2f227e0-402a-11ea-b664-0242ac110005" in namespace "e2e-tests-secrets-lwg8n" to be "success or failure"
Jan 26 10:57:52.348: INFO: Pod "pod-secrets-b2f227e0-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.796893ms
Jan 26 10:57:54.375: INFO: Pod "pod-secrets-b2f227e0-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040830777s
Jan 26 10:57:56.948: INFO: Pod "pod-secrets-b2f227e0-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.61405678s
Jan 26 10:57:59.892: INFO: Pod "pod-secrets-b2f227e0-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.557791979s
Jan 26 10:58:01.932: INFO: Pod "pod-secrets-b2f227e0-402a-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.597509136s
STEP: Saw pod success
Jan 26 10:58:01.932: INFO: Pod "pod-secrets-b2f227e0-402a-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 10:58:01.942: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b2f227e0-402a-11ea-b664-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 26 10:58:02.531: INFO: Waiting for pod pod-secrets-b2f227e0-402a-11ea-b664-0242ac110005 to disappear
Jan 26 10:58:02.573: INFO: Pod pod-secrets-b2f227e0-402a-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:58:02.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lwg8n" for this suite.
Jan 26 10:58:10.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:58:10.799: INFO: namespace: e2e-tests-secrets-lwg8n, resource: bindings, ignored listing per whitelist
Jan 26 10:58:10.910: INFO: namespace e2e-tests-secrets-lwg8n deletion completed in 8.31313728s

• [SLOW TEST:18.747 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:58:10.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-be300052-402a-11ea-b664-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-be300030-402a-11ea-b664-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 26 10:58:11.273: INFO: Waiting up to 5m0s for pod "projected-volume-be2fff97-402a-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-zbfx5" to be "success or failure"
Jan 26 10:58:11.337: INFO: Pod "projected-volume-be2fff97-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 64.226582ms
Jan 26 10:58:13.522: INFO: Pod "projected-volume-be2fff97-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248375416s
Jan 26 10:58:15.531: INFO: Pod "projected-volume-be2fff97-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258272783s
Jan 26 10:58:17.560: INFO: Pod "projected-volume-be2fff97-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.286963098s
Jan 26 10:58:19.577: INFO: Pod "projected-volume-be2fff97-402a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.30409498s
Jan 26 10:58:21.595: INFO: Pod "projected-volume-be2fff97-402a-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.321820503s
STEP: Saw pod success
Jan 26 10:58:21.595: INFO: Pod "projected-volume-be2fff97-402a-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 10:58:21.602: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-be2fff97-402a-11ea-b664-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jan 26 10:58:21.678: INFO: Waiting for pod projected-volume-be2fff97-402a-11ea-b664-0242ac110005 to disappear
Jan 26 10:58:21.700: INFO: Pod projected-volume-be2fff97-402a-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:58:21.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zbfx5" for this suite.
Jan 26 10:58:27.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 10:58:27.852: INFO: namespace: e2e-tests-projected-zbfx5, resource: bindings, ignored listing per whitelist
Jan 26 10:58:27.950: INFO: namespace e2e-tests-projected-zbfx5 deletion completed in 6.244350755s

• [SLOW TEST:17.040 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
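The "all components" test above combines ConfigMap, Secret, and downward API sources in a single projected volume. A minimal sketch of such a volume follows; the object names, keys, and paths are illustrative, not the suite's exact values.

// Hedged sketch: one projected volume combining ConfigMap, Secret, and downward API sources.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func combinedProjectedVolume() corev1.Volume {
	return corev1.Volume{
		Name: "projected-all-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
							Items: []corev1.KeyToPath{{Key: "configmap-data", Path: "configmap-data"}},
						},
					},
					{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
							Items: []corev1.KeyToPath{{Key: "secret-data", Path: "secret-data"}},
						},
					},
					{
						DownwardAPI: &corev1.DownwardAPIProjection{
							Items: []corev1.DownwardAPIVolumeFile{{
								Path:     "podname",
								FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
							}},
						},
					},
				},
			},
		},
	}
}

func main() { fmt.Println(combinedProjectedVolume().Name) }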
SSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 10:58:27.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-c84df9ac-402a-11ea-b664-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-c84df9ac-402a-11ea-b664-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 10:59:50.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lwwhn" for this suite.
Jan 26 11:00:14.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:00:15.005: INFO: namespace: e2e-tests-projected-lwwhn, resource: bindings, ignored listing per whitelist
Jan 26 11:00:15.076: INFO: namespace e2e-tests-projected-lwwhn deletion completed in 24.162801316s

• [SLOW TEST:107.126 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
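
The update-propagation behaviour tested above can be reproduced with an ordinary ConfigMap mounted through a projected volume; after the ConfigMap is edited, the kubelet eventually rewrites the file inside the running pod, which is what the "waiting to observe update in volume" step polls for. A hedged sketch with made-up names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-projected-cm             # illustrative name
data:
  data-1: value-1                        # edit this value and re-apply to see the mounted file change
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-watcher             # illustrative name
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: example-projected-cm
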
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:00:15.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 26 11:00:25.939: INFO: Successfully updated pod "labelsupdate082e6795-402b-11ea-b664-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:00:28.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9lll4" for this suite.
Jan 26 11:00:52.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:00:52.142: INFO: namespace: e2e-tests-downward-api-9lll4, resource: bindings, ignored listing per whitelist
Jan 26 11:00:52.227: INFO: namespace e2e-tests-downward-api-9lll4 deletion completed in 24.185315947s

• [SLOW TEST:37.151 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
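
A corresponding sketch for the label-update case: a downwardAPI volume exposing metadata.labels, so that relabeling the pod after creation (the "Successfully updated pod" step above) eventually shows up in the mounted file. Names are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: labels-update-example            # illustrative name
  labels:
    key: value-1                         # change this label on the live pod to see the file update
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
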
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:00:52.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 26 11:00:52.431: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-m8kbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8kbh/configmaps/e2e-watch-test-watch-closed,UID:1e3bf326-402b-11ea-a994-fa163e34d433,ResourceVersion:19511531,Generation:0,CreationTimestamp:2020-01-26 11:00:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 26 11:00:52.431: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-m8kbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8kbh/configmaps/e2e-watch-test-watch-closed,UID:1e3bf326-402b-11ea-a994-fa163e34d433,ResourceVersion:19511532,Generation:0,CreationTimestamp:2020-01-26 11:00:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 26 11:00:52.471: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-m8kbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8kbh/configmaps/e2e-watch-test-watch-closed,UID:1e3bf326-402b-11ea-a994-fa163e34d433,ResourceVersion:19511533,Generation:0,CreationTimestamp:2020-01-26 11:00:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 26 11:00:52.472: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-m8kbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8kbh/configmaps/e2e-watch-test-watch-closed,UID:1e3bf326-402b-11ea-a994-fa163e34d433,ResourceVersion:19511534,Generation:0,CreationTimestamp:2020-01-26 11:00:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:00:52.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-m8kbh" for this suite.
Jan 26 11:00:58.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:00:58.557: INFO: namespace: e2e-tests-watch-m8kbh, resource: bindings, ignored listing per whitelist
Jan 26 11:00:58.705: INFO: namespace e2e-tests-watch-m8kbh deletion completed in 6.227913031s

• [SLOW TEST:6.477 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:00:58.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-795s
STEP: Creating a pod to test atomic-volume-subpath
Jan 26 11:00:59.013: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-795s" in namespace "e2e-tests-subpath-6w2hp" to be "success or failure"
Jan 26 11:00:59.024: INFO: Pod "pod-subpath-test-projected-795s": Phase="Pending", Reason="", readiness=false. Elapsed: 11.215521ms
Jan 26 11:01:03.135: INFO: Pod "pod-subpath-test-projected-795s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121933353s
Jan 26 11:01:05.201: INFO: Pod "pod-subpath-test-projected-795s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18768494s
Jan 26 11:01:07.217: INFO: Pod "pod-subpath-test-projected-795s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.203657564s
Jan 26 11:01:09.237: INFO: Pod "pod-subpath-test-projected-795s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.224128131s
Jan 26 11:01:11.245: INFO: Pod "pod-subpath-test-projected-795s": Phase="Pending", Reason="", readiness=false. Elapsed: 12.231875121s
Jan 26 11:01:13.253: INFO: Pod "pod-subpath-test-projected-795s": Phase="Pending", Reason="", readiness=false. Elapsed: 14.239913033s
Jan 26 11:01:15.262: INFO: Pod "pod-subpath-test-projected-795s": Phase="Pending", Reason="", readiness=false. Elapsed: 16.248917096s
Jan 26 11:01:17.270: INFO: Pod "pod-subpath-test-projected-795s": Phase="Pending", Reason="", readiness=false. Elapsed: 18.257236557s
Jan 26 11:01:19.288: INFO: Pod "pod-subpath-test-projected-795s": Phase="Running", Reason="", readiness=false. Elapsed: 20.274783372s
Jan 26 11:01:21.301: INFO: Pod "pod-subpath-test-projected-795s": Phase="Running", Reason="", readiness=false. Elapsed: 22.287644929s
Jan 26 11:01:23.316: INFO: Pod "pod-subpath-test-projected-795s": Phase="Running", Reason="", readiness=false. Elapsed: 24.302910226s
Jan 26 11:01:25.326: INFO: Pod "pod-subpath-test-projected-795s": Phase="Running", Reason="", readiness=false. Elapsed: 26.312860886s
Jan 26 11:01:27.350: INFO: Pod "pod-subpath-test-projected-795s": Phase="Running", Reason="", readiness=false. Elapsed: 28.33731183s
Jan 26 11:01:29.378: INFO: Pod "pod-subpath-test-projected-795s": Phase="Running", Reason="", readiness=false. Elapsed: 30.364882044s
Jan 26 11:01:31.394: INFO: Pod "pod-subpath-test-projected-795s": Phase="Running", Reason="", readiness=false. Elapsed: 32.380533493s
Jan 26 11:01:33.408: INFO: Pod "pod-subpath-test-projected-795s": Phase="Running", Reason="", readiness=false. Elapsed: 34.395337896s
Jan 26 11:01:35.428: INFO: Pod "pod-subpath-test-projected-795s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.414715662s
STEP: Saw pod success
Jan 26 11:01:35.428: INFO: Pod "pod-subpath-test-projected-795s" satisfied condition "success or failure"
Jan 26 11:01:35.436: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-795s container test-container-subpath-projected-795s: 
STEP: delete the pod
Jan 26 11:01:35.727: INFO: Waiting for pod pod-subpath-test-projected-795s to disappear
Jan 26 11:01:35.786: INFO: Pod pod-subpath-test-projected-795s no longer exists
STEP: Deleting pod pod-subpath-test-projected-795s
Jan 26 11:01:35.787: INFO: Deleting pod "pod-subpath-test-projected-795s" in namespace "e2e-tests-subpath-6w2hp"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:01:35.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-6w2hp" for this suite.
Jan 26 11:01:42.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:01:42.203: INFO: namespace: e2e-tests-subpath-6w2hp, resource: bindings, ignored listing per whitelist
Jan 26 11:01:42.221: INFO: namespace e2e-tests-subpath-6w2hp deletion completed in 6.238200785s

• [SLOW TEST:43.516 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
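
The atomic-writer subpath cases in this run (the projected pod here, and the configmap variants later) all follow the same shape, so a single hedged sketch with illustrative names covers them: a volume mounted into the container via subPath, which the test container reads until the pod succeeds.

apiVersion: v1
kind: Pod
metadata:
  name: subpath-projected-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /test-volume/configmap-key"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-volume/configmap-key
      subPath: configmap-key             # mount a single file out of the volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: example-configmap        # illustrative; must exist in the namespace
          items:
          - key: configmap-key
            path: configmap-key
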
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:01:42.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 26 11:01:53.539: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:01:55.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-75b9s" for this suite.
Jan 26 11:02:21.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:02:21.468: INFO: namespace: e2e-tests-replicaset-75b9s, resource: bindings, ignored listing per whitelist
Jan 26 11:02:21.546: INFO: namespace e2e-tests-replicaset-75b9s deletion completed in 26.450808314s

• [SLOW TEST:39.325 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
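
A sketch of the adoption/release scenario under test, with illustrative names: a bare pod is created first carrying the same 'name' label the ReplicaSet selects on, so the controller adopts it; changing that label on the pod afterwards releases it again, which is what the "Then the pod is released" step asserts.

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release-example     # illustrative; created before the ReplicaSet
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release-rs          # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release         # matches the bare pod above, so it is adopted
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
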
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:02:21.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-ktfs
STEP: Creating a pod to test atomic-volume-subpath
Jan 26 11:02:22.042: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ktfs" in namespace "e2e-tests-subpath-7kmjx" to be "success or failure"
Jan 26 11:02:22.051: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.698664ms
Jan 26 11:02:24.077: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034870422s
Jan 26 11:02:26.085: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042941576s
Jan 26 11:02:29.266: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Pending", Reason="", readiness=false. Elapsed: 7.22385199s
Jan 26 11:02:31.333: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Pending", Reason="", readiness=false. Elapsed: 9.291111216s
Jan 26 11:02:33.361: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Pending", Reason="", readiness=false. Elapsed: 11.318522189s
Jan 26 11:02:35.390: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Pending", Reason="", readiness=false. Elapsed: 13.347411801s
Jan 26 11:02:37.643: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Pending", Reason="", readiness=false. Elapsed: 15.600466784s
Jan 26 11:02:39.649: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Pending", Reason="", readiness=false. Elapsed: 17.606476837s
Jan 26 11:02:41.657: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Running", Reason="", readiness=false. Elapsed: 19.614884065s
Jan 26 11:02:43.724: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Running", Reason="", readiness=false. Elapsed: 21.681396293s
Jan 26 11:02:45.733: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Running", Reason="", readiness=false. Elapsed: 23.691066263s
Jan 26 11:02:47.754: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Running", Reason="", readiness=false. Elapsed: 25.711796081s
Jan 26 11:02:49.912: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Running", Reason="", readiness=false. Elapsed: 27.870168573s
Jan 26 11:02:51.922: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Running", Reason="", readiness=false. Elapsed: 29.879991342s
Jan 26 11:02:53.956: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Running", Reason="", readiness=false. Elapsed: 31.913286955s
Jan 26 11:02:55.972: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Running", Reason="", readiness=false. Elapsed: 33.929628182s
Jan 26 11:02:57.993: INFO: Pod "pod-subpath-test-configmap-ktfs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.951100404s
STEP: Saw pod success
Jan 26 11:02:57.994: INFO: Pod "pod-subpath-test-configmap-ktfs" satisfied condition "success or failure"
Jan 26 11:02:58.006: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-ktfs container test-container-subpath-configmap-ktfs: 
STEP: delete the pod
Jan 26 11:02:58.166: INFO: Waiting for pod pod-subpath-test-configmap-ktfs to disappear
Jan 26 11:02:58.198: INFO: Pod pod-subpath-test-configmap-ktfs no longer exists
STEP: Deleting pod pod-subpath-test-configmap-ktfs
Jan 26 11:02:58.198: INFO: Deleting pod "pod-subpath-test-configmap-ktfs" in namespace "e2e-tests-subpath-7kmjx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:02:58.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-7kmjx" for this suite.
Jan 26 11:03:04.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:03:04.366: INFO: namespace: e2e-tests-subpath-7kmjx, resource: bindings, ignored listing per whitelist
Jan 26 11:03:04.498: INFO: namespace e2e-tests-subpath-7kmjx deletion completed in 6.278763347s

• [SLOW TEST:42.952 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:03:04.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-6d18a47c-402b-11ea-b664-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-6d18a4c6-402b-11ea-b664-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6d18a47c-402b-11ea-b664-0242ac110005
STEP: Updating configmap cm-test-opt-upd-6d18a4c6-402b-11ea-b664-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-6d18a4de-402b-11ea-b664-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:04:48.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4xxcz" for this suite.
Jan 26 11:05:12.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:05:12.500: INFO: namespace: e2e-tests-projected-4xxcz, resource: bindings, ignored listing per whitelist
Jan 26 11:05:12.661: INFO: namespace e2e-tests-projected-4xxcz deletion completed in 24.260079148s

• [SLOW TEST:128.162 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
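
The "optional updates" variant above additionally relies on the optional flag, so a referenced ConfigMap may be deleted after the pod starts, or created only later, without failing the mount. A hedged sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-example              # illustrative name
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do ls /etc/opt-cm; sleep 5; done"]
    volumeMounts:
    - name: opt-cm
      mountPath: /etc/opt-cm
  volumes:
  - name: opt-cm
    projected:
      sources:
      - configMap:
          name: cm-that-may-not-exist    # illustrative; pod still starts if it is missing
          optional: true
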
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:05:12.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan 26 11:05:12.871: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 26 11:05:12.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-b4ztm'
Jan 26 11:05:15.086: INFO: stderr: ""
Jan 26 11:05:15.086: INFO: stdout: "service/redis-slave created\n"
Jan 26 11:05:15.086: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 26 11:05:15.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-b4ztm'
Jan 26 11:05:15.443: INFO: stderr: ""
Jan 26 11:05:15.443: INFO: stdout: "service/redis-master created\n"
Jan 26 11:05:15.444: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 26 11:05:15.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-b4ztm'
Jan 26 11:05:16.109: INFO: stderr: ""
Jan 26 11:05:16.109: INFO: stdout: "service/frontend created\n"
Jan 26 11:05:16.109: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 26 11:05:16.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-b4ztm'
Jan 26 11:05:16.538: INFO: stderr: ""
Jan 26 11:05:16.538: INFO: stdout: "deployment.extensions/frontend created\n"
Jan 26 11:05:16.539: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 26 11:05:16.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-b4ztm'
Jan 26 11:05:16.891: INFO: stderr: ""
Jan 26 11:05:16.891: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan 26 11:05:16.892: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 26 11:05:16.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-b4ztm'
Jan 26 11:05:17.497: INFO: stderr: ""
Jan 26 11:05:17.498: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan 26 11:05:17.498: INFO: Waiting for all frontend pods to be Running.
Jan 26 11:05:47.550: INFO: Waiting for frontend to serve content.
Jan 26 11:05:47.640: INFO: Trying to add a new entry to the guestbook.
Jan 26 11:05:47.671: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 26 11:05:47.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-b4ztm'
Jan 26 11:05:48.010: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 11:05:48.010: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 26 11:05:48.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-b4ztm'
Jan 26 11:05:48.455: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 11:05:48.456: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 26 11:05:48.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-b4ztm'
Jan 26 11:05:48.871: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 11:05:48.871: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 26 11:05:48.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-b4ztm'
Jan 26 11:05:49.056: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 11:05:49.056: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 26 11:05:49.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-b4ztm'
Jan 26 11:05:49.320: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 11:05:49.320: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 26 11:05:49.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-b4ztm'
Jan 26 11:05:49.654: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 11:05:49.655: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:05:49.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-b4ztm" for this suite.
Jan 26 11:06:33.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:06:34.016: INFO: namespace: e2e-tests-kubectl-b4ztm, resource: bindings, ignored listing per whitelist
Jan 26 11:06:34.098: INFO: namespace e2e-tests-kubectl-b4ztm deletion completed in 44.332807494s

• [SLOW TEST:81.438 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:06:34.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 26 11:06:34.307: INFO: PodSpec: initContainers in spec.initContainers
Jan 26 11:07:43.530: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ea1273f3-402b-11ea-b664-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-f2vlq", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-f2vlq/pods/pod-init-ea1273f3-402b-11ea-b664-0242ac110005", UID:"ea13ce12-402b-11ea-a994-fa163e34d433", ResourceVersion:"19512407", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715633594, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"307084115"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7lxx2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001023940), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7lxx2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7lxx2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7lxx2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000cd9aa8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001791080), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000cd9b30)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000cd9b80)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000cd9b88), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000cd9b8c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715633594, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715633594, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715633594, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715633594, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001191c20), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0001612d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000161340)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://757a1638c3a1ddf59b284e1b18cce862ad96222a4372bf1adc1b1de2daad6c52"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001191d00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001191ce0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:07:43.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-f2vlq" for this suite.
Jan 26 11:08:07.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:08:07.727: INFO: namespace: e2e-tests-init-container-f2vlq, resource: bindings, ignored listing per whitelist
Jan 26 11:08:07.954: INFO: namespace e2e-tests-init-container-f2vlq deletion completed in 24.39118543s

• [SLOW TEST:93.855 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
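
A minimal sketch of the failing-init-container scenario above, with illustrative names: under restartPolicy Always the failing init container is retried with back-off, and neither the second init container nor the app container ever starts, matching the RestartCount and "containers with incomplete status" seen in the pod dump.

apiVersion: v1
kind: Pod
metadata:
  name: failing-init-example             # illustrative name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]              # always fails, so init2 and run1 never run
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
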
SSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:08:07.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 26 11:08:18.810: INFO: Successfully updated pod "pod-update-activedeadlineseconds-21fa365b-402c-11ea-b664-0242ac110005"
Jan 26 11:08:18.811: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-21fa365b-402c-11ea-b664-0242ac110005" in namespace "e2e-tests-pods-zvkxl" to be "terminated due to deadline exceeded"
Jan 26 11:08:18.838: INFO: Pod "pod-update-activedeadlineseconds-21fa365b-402c-11ea-b664-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 27.222216ms
Jan 26 11:08:20.887: INFO: Pod "pod-update-activedeadlineseconds-21fa365b-402c-11ea-b664-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.076690413s
Jan 26 11:08:20.887: INFO: Pod "pod-update-activedeadlineseconds-21fa365b-402c-11ea-b664-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:08:20.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zvkxl" for this suite.
Jan 26 11:08:26.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:08:27.131: INFO: namespace: e2e-tests-pods-zvkxl, resource: bindings, ignored listing per whitelist
Jan 26 11:08:27.239: INFO: namespace e2e-tests-pods-zvkxl deletion completed in 6.341154099s

• [SLOW TEST:19.285 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
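
The deadline test updates a running pod to set spec.activeDeadlineSeconds; once the deadline elapses the pod is failed with reason DeadlineExceeded, as seen above. A sketch with the field set at creation time instead (name and value are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: active-deadline-example          # illustrative name
spec:
  activeDeadlineSeconds: 5               # pod is killed and marked Failed/DeadlineExceeded after ~5s
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
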
SS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:08:27.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:09:20.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-8mtgz" for this suite.
Jan 26 11:09:26.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:09:26.868: INFO: namespace: e2e-tests-container-runtime-8mtgz, resource: bindings, ignored listing per whitelist
Jan 26 11:09:26.971: INFO: namespace e2e-tests-container-runtime-8mtgz deletion completed in 6.473565048s

• [SLOW TEST:59.731 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
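
The blackbox runtime test drives pods shaped roughly like the sketch below and then asserts on RestartCount, Phase, the Ready condition and State; presumably only the restart policy and the container's exit code vary between the terminate-cmd-rpa/rpof/rpn containers. Names and values here are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-example            # illustrative name
spec:
  restartPolicy: OnFailure               # other variants would use Always and Never
  containers:
  - name: terminate-cmd
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "exit 1"]      # non-zero exit drives the expected RestartCount and Phase
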
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:09:26.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 26 11:09:27.358: INFO: Waiting up to 5m0s for pod "client-containers-5134e9a9-402c-11ea-b664-0242ac110005" in namespace "e2e-tests-containers-xjqpx" to be "success or failure"
Jan 26 11:09:27.371: INFO: Pod "client-containers-5134e9a9-402c-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.503853ms
Jan 26 11:09:29.699: INFO: Pod "client-containers-5134e9a9-402c-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.341493068s
Jan 26 11:09:31.734: INFO: Pod "client-containers-5134e9a9-402c-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.376352789s
Jan 26 11:09:33.825: INFO: Pod "client-containers-5134e9a9-402c-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.467833541s
Jan 26 11:09:35.841: INFO: Pod "client-containers-5134e9a9-402c-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.483514641s
Jan 26 11:09:37.872: INFO: Pod "client-containers-5134e9a9-402c-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.514323258s
STEP: Saw pod success
Jan 26 11:09:37.872: INFO: Pod "client-containers-5134e9a9-402c-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:09:37.909: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-5134e9a9-402c-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 11:09:38.187: INFO: Waiting for pod client-containers-5134e9a9-402c-11ea-b664-0242ac110005 to disappear
Jan 26 11:09:38.192: INFO: Pod client-containers-5134e9a9-402c-11ea-b664-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:09:38.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-xjqpx" for this suite.
Jan 26 11:09:44.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:09:44.500: INFO: namespace: e2e-tests-containers-xjqpx, resource: bindings, ignored listing per whitelist
Jan 26 11:09:44.517: INFO: namespace e2e-tests-containers-xjqpx deletion completed in 6.312009812s

• [SLOW TEST:17.546 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
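
The override test comes down to setting both command and args on the container, which replace the image's default ENTRYPOINT and CMD respectively. A hedged sketch with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: override-command-example         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]               # replaces the image ENTRYPOINT
    args: ["override", "arguments"]      # replaces the image CMD
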
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:09:44.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-wwh7
STEP: Creating a pod to test atomic-volume-subpath
Jan 26 11:09:44.797: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wwh7" in namespace "e2e-tests-subpath-m7sbd" to be "success or failure"
Jan 26 11:09:44.897: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Pending", Reason="", readiness=false. Elapsed: 100.146662ms
Jan 26 11:09:46.911: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113425611s
Jan 26 11:09:48.923: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125695711s
Jan 26 11:09:51.359: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.562324557s
Jan 26 11:09:53.718: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.921323998s
Jan 26 11:09:55.725: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.928070596s
Jan 26 11:09:57.962: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.1643769s
Jan 26 11:09:59.975: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.178203789s
Jan 26 11:10:01.995: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Running", Reason="", readiness=false. Elapsed: 17.197944354s
Jan 26 11:10:04.011: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Running", Reason="", readiness=false. Elapsed: 19.213957306s
Jan 26 11:10:06.030: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Running", Reason="", readiness=false. Elapsed: 21.232591662s
Jan 26 11:10:08.085: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Running", Reason="", readiness=false. Elapsed: 23.288327307s
Jan 26 11:10:10.111: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Running", Reason="", readiness=false. Elapsed: 25.313422192s
Jan 26 11:10:12.175: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Running", Reason="", readiness=false. Elapsed: 27.377903915s
Jan 26 11:10:14.186: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Running", Reason="", readiness=false. Elapsed: 29.388976803s
Jan 26 11:10:16.204: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Running", Reason="", readiness=false. Elapsed: 31.406528012s
Jan 26 11:10:18.220: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Running", Reason="", readiness=false. Elapsed: 33.423333969s
Jan 26 11:10:20.236: INFO: Pod "pod-subpath-test-configmap-wwh7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.439023069s
STEP: Saw pod success
Jan 26 11:10:20.236: INFO: Pod "pod-subpath-test-configmap-wwh7" satisfied condition "success or failure"
Jan 26 11:10:20.246: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-wwh7 container test-container-subpath-configmap-wwh7: 
STEP: delete the pod
Jan 26 11:10:20.342: INFO: Waiting for pod pod-subpath-test-configmap-wwh7 to disappear
Jan 26 11:10:20.405: INFO: Pod pod-subpath-test-configmap-wwh7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-wwh7
Jan 26 11:10:20.405: INFO: Deleting pod "pod-subpath-test-configmap-wwh7" in namespace "e2e-tests-subpath-m7sbd"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:10:20.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-m7sbd" for this suite.
Jan 26 11:10:26.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:10:26.714: INFO: namespace: e2e-tests-subpath-m7sbd, resource: bindings, ignored listing per whitelist
Jan 26 11:10:26.720: INFO: namespace e2e-tests-subpath-m7sbd deletion completed in 6.293136501s

• [SLOW TEST:42.203 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
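
For reference, a minimal sketch of the kind of spec the subpath test above exercises (resource names, image, and command are illustrative, not the ones the framework generates): a single ConfigMap key is mounted through volumeMounts[].subPath, so only that key appears at the mount path.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-subpath-cm
data:
  configmap-contents: "mount-tester new file"
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-subpath-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.29
    command: ["sh", "-c", "cat /test-volume/configmap-contents"]
    volumeMounts:
    - name: cm-vol
      mountPath: /test-volume/configmap-contents
      subPath: configmap-contents    # mount one key as a file, not the whole ConfigMap
  volumes:
  - name: cm-vol
    configMap:
      name: demo-subpath-cm
EOF
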
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:10:26.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan 26 11:10:26.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-vqlp6 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 26 11:10:38.140: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0126 11:10:36.767246     408 log.go:172] (0xc00077c160) (0xc0008d23c0) Create stream\nI0126 11:10:36.767424     408 log.go:172] (0xc00077c160) (0xc0008d23c0) Stream added, broadcasting: 1\nI0126 11:10:36.773838     408 log.go:172] (0xc00077c160) Reply frame received for 1\nI0126 11:10:36.773869     408 log.go:172] (0xc00077c160) (0xc0003a2fa0) Create stream\nI0126 11:10:36.773877     408 log.go:172] (0xc00077c160) (0xc0003a2fa0) Stream added, broadcasting: 3\nI0126 11:10:36.775051     408 log.go:172] (0xc00077c160) Reply frame received for 3\nI0126 11:10:36.775074     408 log.go:172] (0xc00077c160) (0xc0003a3040) Create stream\nI0126 11:10:36.775080     408 log.go:172] (0xc00077c160) (0xc0003a3040) Stream added, broadcasting: 5\nI0126 11:10:36.776067     408 log.go:172] (0xc00077c160) Reply frame received for 5\nI0126 11:10:36.776103     408 log.go:172] (0xc00077c160) (0xc000a44000) Create stream\nI0126 11:10:36.776117     408 log.go:172] (0xc00077c160) (0xc000a44000) Stream added, broadcasting: 7\nI0126 11:10:36.777067     408 log.go:172] (0xc00077c160) Reply frame received for 7\nI0126 11:10:36.777275     408 log.go:172] (0xc0003a2fa0) (3) Writing data frame\nI0126 11:10:36.777398     408 log.go:172] (0xc0003a2fa0) (3) Writing data frame\nI0126 11:10:36.825842     408 log.go:172] (0xc00077c160) Data frame received for 5\nI0126 11:10:36.825900     408 log.go:172] (0xc0003a3040) (5) Data frame handling\nI0126 11:10:36.825951     408 log.go:172] (0xc0003a3040) (5) Data frame sent\nI0126 11:10:36.831400     408 log.go:172] (0xc00077c160) Data frame received for 5\nI0126 11:10:36.831427     408 log.go:172] (0xc0003a3040) (5) Data frame handling\nI0126 11:10:36.831438     408 log.go:172] (0xc0003a3040) (5) Data frame sent\nI0126 11:10:38.078769     408 log.go:172] (0xc00077c160) Data frame received for 1\nI0126 11:10:38.078889     408 log.go:172] (0xc00077c160) (0xc0003a3040) Stream removed, broadcasting: 5\nI0126 11:10:38.079002     408 log.go:172] (0xc0008d23c0) (1) Data frame handling\nI0126 11:10:38.079044     408 log.go:172] (0xc0008d23c0) (1) Data frame sent\nI0126 11:10:38.079185     408 log.go:172] (0xc00077c160) (0xc0003a2fa0) Stream removed, broadcasting: 3\nI0126 11:10:38.079245     408 log.go:172] (0xc00077c160) (0xc0008d23c0) Stream removed, broadcasting: 1\nI0126 11:10:38.079423     408 log.go:172] (0xc00077c160) (0xc000a44000) Stream removed, broadcasting: 7\nI0126 11:10:38.079458     408 log.go:172] (0xc00077c160) Go away received\nI0126 11:10:38.079833     408 log.go:172] (0xc00077c160) (0xc0008d23c0) Stream removed, broadcasting: 1\nI0126 11:10:38.079924     408 log.go:172] (0xc00077c160) (0xc0003a2fa0) Stream removed, broadcasting: 3\nI0126 11:10:38.079953     408 log.go:172] (0xc00077c160) (0xc0003a3040) Stream removed, broadcasting: 5\nI0126 11:10:38.079970     408 log.go:172] (0xc00077c160) (0xc000a44000) Stream removed, broadcasting: 7\n"
Jan 26 11:10:38.140: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:10:40.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vqlp6" for this suite.
Jan 26 11:10:46.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:10:46.687: INFO: namespace: e2e-tests-kubectl-vqlp6, resource: bindings, ignored listing per whitelist
Jan 26 11:10:46.762: INFO: namespace e2e-tests-kubectl-vqlp6 deletion completed in 6.197509317s

• [SLOW TEST:20.041 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
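
The long single-line invocation logged above, reformatted here for readability; the attached stdin evidently carries the string abcd1234, which is what cat echoes back before "stdin closed" in the stdout line:

printf 'abcd1234' | kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-vqlp6 \
  run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin \
  -- sh -c 'cat && echo "stdin closed"'
# --rm deletes the Job once the attached session exits, which is what the
# 'job.batch "e2e-test-rm-busybox-job" deleted' line in stdout reflects.
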
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:10:46.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-b8vvb
Jan 26 11:10:57.080: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-b8vvb
STEP: checking the pod's current state and verifying that restartCount is present
Jan 26 11:10:57.084: INFO: Initial restart count of pod liveness-http is 0
Jan 26 11:11:17.413: INFO: Restart count of pod e2e-tests-container-probe-b8vvb/liveness-http is now 1 (20.329132592s elapsed)
Jan 26 11:11:37.909: INFO: Restart count of pod e2e-tests-container-probe-b8vvb/liveness-http is now 2 (40.825527612s elapsed)
Jan 26 11:11:58.188: INFO: Restart count of pod e2e-tests-container-probe-b8vvb/liveness-http is now 3 (1m1.104133078s elapsed)
Jan 26 11:12:16.868: INFO: Restart count of pod e2e-tests-container-probe-b8vvb/liveness-http is now 4 (1m19.784752797s elapsed)
Jan 26 11:12:37.080: INFO: Restart count of pod e2e-tests-container-probe-b8vvb/liveness-http is now 5 (1m39.996072508s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:12:37.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-b8vvb" for this suite.
Jan 26 11:12:43.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:12:43.489: INFO: namespace: e2e-tests-container-probe-b8vvb, resource: bindings, ignored listing per whitelist
Jan 26 11:12:43.489: INFO: namespace e2e-tests-container-probe-b8vvb deletion completed in 6.270528904s

• [SLOW TEST:116.727 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
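
A minimal sketch of a pod like liveness-http above: an HTTP liveness probe against an endpoint that stops answering, so the kubelet keeps restarting the container and restartCount climbs monotonically. The image and probe parameters are assumptions for illustration, not the test's exact spec.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness        # assumption: serves /healthz, then starts failing it
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1
EOF
# watch the restart count grow, as the log above does roughly every 20s:
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'
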
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:12:43.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-c63c2b73-402c-11ea-b664-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-c63c2b73-402c-11ea-b664-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:14:00.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-knnvw" for this suite.
Jan 26 11:14:24.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:14:24.462: INFO: namespace: e2e-tests-configmap-knnvw, resource: bindings, ignored listing per whitelist
Jan 26 11:14:24.582: INFO: namespace e2e-tests-configmap-knnvw deletion completed in 24.196403536s

• [SLOW TEST:101.093 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
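
A rough illustration of what "updates should be reflected in volume" means in practice (names and timings are illustrative): a ConfigMap mounted as a volume is modified after the pod starts, and within the kubelet's sync period the mounted file changes in place, with no pod restart.

kubectl create configmap demo-upd --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-cm-watch
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/config
  volumes:
  - name: cm
    configMap:
      name: demo-upd
EOF
kubectl patch configmap demo-upd -p '{"data":{"data-1":"value-2"}}'
kubectl logs demo-cm-watch --tail=1    # eventually flips from value-1 to value-2
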
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:14:24.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-02861819-402d-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 11:14:24.919: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-028f76ae-402d-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-gz5jv" to be "success or failure"
Jan 26 11:14:24.924: INFO: Pod "pod-projected-secrets-028f76ae-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.122989ms
Jan 26 11:14:26.936: INFO: Pod "pod-projected-secrets-028f76ae-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016398086s
Jan 26 11:14:28.961: INFO: Pod "pod-projected-secrets-028f76ae-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041941662s
Jan 26 11:14:30.975: INFO: Pod "pod-projected-secrets-028f76ae-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05558197s
Jan 26 11:14:32.999: INFO: Pod "pod-projected-secrets-028f76ae-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079616981s
Jan 26 11:14:35.029: INFO: Pod "pod-projected-secrets-028f76ae-402d-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110115589s
STEP: Saw pod success
Jan 26 11:14:35.029: INFO: Pod "pod-projected-secrets-028f76ae-402d-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:14:35.035: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-028f76ae-402d-11ea-b664-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 26 11:14:35.132: INFO: Waiting for pod pod-projected-secrets-028f76ae-402d-11ea-b664-0242ac110005 to disappear
Jan 26 11:14:35.147: INFO: Pod pod-projected-secrets-028f76ae-402d-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:14:35.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gz5jv" for this suite.
Jan 26 11:14:41.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:14:41.498: INFO: namespace: e2e-tests-projected-gz5jv, resource: bindings, ignored listing per whitelist
Jan 26 11:14:41.582: INFO: namespace e2e-tests-projected-gz5jv deletion completed in 6.424947962s

• [SLOW TEST:16.999 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
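
The defaultMode assertion above boils down to something like this (names and mode are illustrative): a Secret delivered through a projected volume whose files are created with the requested permission bits.

kubectl create secret generic demo-projected-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-projected-secret-pod
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /projected && cat /projected/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /projected
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0400        # the permission bits the mounted file should carry
      sources:
      - secret:
          name: demo-projected-secret
EOF
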
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:14:41.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 26 11:14:41.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-vtqr9'
Jan 26 11:14:42.035: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 26 11:14:42.035: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan 26 11:14:42.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-vtqr9'
Jan 26 11:14:42.476: INFO: stderr: ""
Jan 26 11:14:42.476: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:14:42.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vtqr9" for this suite.
Jan 26 11:14:50.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:14:51.102: INFO: namespace: e2e-tests-kubectl-vtqr9, resource: bindings, ignored listing per whitelist
Jan 26 11:14:51.108: INFO: namespace e2e-tests-kubectl-vtqr9 deletion completed in 8.61264009s

• [SLOW TEST:9.526 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:14:51.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-6zlhh.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6zlhh.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6zlhh.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-6zlhh.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6zlhh.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6zlhh.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 26 11:15:05.449: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.453: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.458: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.463: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.475: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.482: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.487: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6zlhh.svc.cluster.local from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.492: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.498: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.502: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.507: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.511: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.515: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.518: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.521: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.524: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.527: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6zlhh.svc.cluster.local from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.530: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.533: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.536: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-123f6a9f-402d-11ea-b664-0242ac110005)
Jan 26 11:15:05.536: INFO: Lookups using e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6zlhh.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6zlhh.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 26 11:15:10.832: INFO: DNS probes using e2e-tests-dns-6zlhh/dns-test-123f6a9f-402d-11ea-b664-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:15:10.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-6zlhh" for this suite.
Jan 26 11:15:19.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:15:19.127: INFO: namespace: e2e-tests-dns-6zlhh, resource: bindings, ignored listing per whitelist
Jan 26 11:15:19.221: INFO: namespace e2e-tests-dns-6zlhh deletion completed in 8.207867911s

• [SLOW TEST:28.113 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
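
Each of the wheezy/jessie probe loops above reduces to pairs of lookups like these, run inside the probe pod; a name passes only when both the UDP and the TCP query return at least one A record, at which point the loop writes its OK marker under /results. The doubled $$ in the logged command is an escaping layer from the probe template; run directly in a shell the checks look like:

check="$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$check"   # UDP lookup
check="$(dig +tcp   +noall +answer +search kubernetes.default A)" && test -n "$check"   # TCP lookup
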
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:15:19.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 11:15:19.530: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan 26 11:15:19.541: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-p6kdz/daemonsets","resourceVersion":"19513327"},"items":null}

Jan 26 11:15:19.545: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-p6kdz/pods","resourceVersion":"19513327"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:15:19.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-p6kdz" for this suite.
Jan 26 11:15:25.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:15:25.794: INFO: namespace: e2e-tests-daemonsets-p6kdz, resource: bindings, ignored listing per whitelist
Jan 26 11:15:25.806: INFO: namespace e2e-tests-daemonsets-p6kdz deletion completed in 6.2473206s

S [SKIPPING] [6.585 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan 26 11:15:19.530: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
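
That rollback spec only runs against clusters with two or more schedulable nodes; on this single-node cluster it is skipped. A generic pre-check (not the framework's own logic):

kubectl get nodes --no-headers | wc -l    # the rollback spec wants at least 2 nodes
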
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:15:25.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 26 11:15:34.789: INFO: Successfully updated pod "labelsupdate27038900-402d-11ea-b664-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:15:38.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dz4kq" for this suite.
Jan 26 11:16:00.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:16:01.061: INFO: namespace: e2e-tests-projected-dz4kq, resource: bindings, ignored listing per whitelist
Jan 26 11:16:01.105: INFO: namespace e2e-tests-projected-dz4kq deletion completed in 22.175767498s

• [SLOW TEST:35.298 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
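
What "update labels on modification" exercises, sketched with illustrative names: the pod's labels are projected into a file via the downward API, the label set is changed after startup, and the kubelet rewrites the mounted file to match.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-labelsupdate
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
kubectl label pod demo-labelsupdate key2=value2
kubectl logs demo-labelsupdate --tail=2    # the labels file eventually gains key2="value2"
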
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:16:01.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 11:16:01.293: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 26 11:16:01.316: INFO: Number of nodes with available pods: 0
Jan 26 11:16:01.316: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 26 11:16:01.357: INFO: Number of nodes with available pods: 0
Jan 26 11:16:01.357: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:02.378: INFO: Number of nodes with available pods: 0
Jan 26 11:16:02.378: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:03.379: INFO: Number of nodes with available pods: 0
Jan 26 11:16:03.379: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:04.400: INFO: Number of nodes with available pods: 0
Jan 26 11:16:04.401: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:05.377: INFO: Number of nodes with available pods: 0
Jan 26 11:16:05.377: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:07.360: INFO: Number of nodes with available pods: 0
Jan 26 11:16:07.361: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:09.129: INFO: Number of nodes with available pods: 0
Jan 26 11:16:09.129: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:09.384: INFO: Number of nodes with available pods: 0
Jan 26 11:16:09.384: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:10.373: INFO: Number of nodes with available pods: 0
Jan 26 11:16:10.373: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:11.444: INFO: Number of nodes with available pods: 1
Jan 26 11:16:11.444: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 26 11:16:11.580: INFO: Number of nodes with available pods: 1
Jan 26 11:16:11.580: INFO: Number of running nodes: 0, number of available pods: 1
Jan 26 11:16:12.625: INFO: Number of nodes with available pods: 0
Jan 26 11:16:12.626: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 26 11:16:12.672: INFO: Number of nodes with available pods: 0
Jan 26 11:16:12.672: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:13.775: INFO: Number of nodes with available pods: 0
Jan 26 11:16:13.775: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:14.994: INFO: Number of nodes with available pods: 0
Jan 26 11:16:14.994: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:15.706: INFO: Number of nodes with available pods: 0
Jan 26 11:16:15.706: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:16.686: INFO: Number of nodes with available pods: 0
Jan 26 11:16:16.686: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:17.707: INFO: Number of nodes with available pods: 0
Jan 26 11:16:17.707: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:18.683: INFO: Number of nodes with available pods: 0
Jan 26 11:16:18.683: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:19.766: INFO: Number of nodes with available pods: 0
Jan 26 11:16:19.766: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:20.683: INFO: Number of nodes with available pods: 0
Jan 26 11:16:20.683: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:21.684: INFO: Number of nodes with available pods: 0
Jan 26 11:16:21.684: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:22.682: INFO: Number of nodes with available pods: 0
Jan 26 11:16:22.682: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:23.690: INFO: Number of nodes with available pods: 0
Jan 26 11:16:23.690: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:25.217: INFO: Number of nodes with available pods: 0
Jan 26 11:16:25.217: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:25.690: INFO: Number of nodes with available pods: 0
Jan 26 11:16:25.690: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:26.692: INFO: Number of nodes with available pods: 0
Jan 26 11:16:26.692: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:27.687: INFO: Number of nodes with available pods: 0
Jan 26 11:16:27.687: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:16:28.836: INFO: Number of nodes with available pods: 1
Jan 26 11:16:28.836: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-wl8dm, will wait for the garbage collector to delete the pods
Jan 26 11:16:28.927: INFO: Deleting DaemonSet.extensions daemon-set took: 23.690675ms
Jan 26 11:16:29.028: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.422118ms
Jan 26 11:16:42.643: INFO: Number of nodes with available pods: 0
Jan 26 11:16:42.643: INFO: Number of running nodes: 0, number of available pods: 0
Jan 26 11:16:42.648: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-wl8dm/daemonsets","resourceVersion":"19513510"},"items":null}

Jan 26 11:16:42.652: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-wl8dm/pods","resourceVersion":"19513510"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:16:42.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-wl8dm" for this suite.
Jan 26 11:16:48.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:16:48.912: INFO: namespace: e2e-tests-daemonsets-wl8dm, resource: bindings, ignored listing per whitelist
Jan 26 11:16:49.014: INFO: namespace e2e-tests-daemonsets-wl8dm deletion completed in 6.265202497s

• [SLOW TEST:47.909 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
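
The "complex daemon" flow above, approximately, with illustrative names and image (the real spec is built by the framework): a DaemonSet constrained by a nodeSelector, where relabelling the node first schedules and then unschedules the daemon pod, matching the blue/green steps in the log.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue              # the test later switches the selector and node label to green
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
EOF
kubectl label node hunter-server-hu5at5svl7ps color=blue               # daemon pod gets scheduled
kubectl label node hunter-server-hu5at5svl7ps color=green --overwrite  # daemon pod is unscheduled again
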
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:16:49.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 11:16:49.311: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58a1a33c-402d-11ea-b664-0242ac110005" in namespace "e2e-tests-downward-api-p7rd4" to be "success or failure"
Jan 26 11:16:49.416: INFO: Pod "downwardapi-volume-58a1a33c-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 104.403846ms
Jan 26 11:16:51.608: INFO: Pod "downwardapi-volume-58a1a33c-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296668389s
Jan 26 11:16:53.632: INFO: Pod "downwardapi-volume-58a1a33c-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320862989s
Jan 26 11:16:55.650: INFO: Pod "downwardapi-volume-58a1a33c-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.339191175s
Jan 26 11:16:57.971: INFO: Pod "downwardapi-volume-58a1a33c-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.659747053s
Jan 26 11:17:00.535: INFO: Pod "downwardapi-volume-58a1a33c-402d-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.223558368s
STEP: Saw pod success
Jan 26 11:17:00.535: INFO: Pod "downwardapi-volume-58a1a33c-402d-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:17:00.543: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-58a1a33c-402d-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 11:17:00.775: INFO: Waiting for pod downwardapi-volume-58a1a33c-402d-11ea-b664-0242ac110005 to disappear
Jan 26 11:17:00.883: INFO: Pod downwardapi-volume-58a1a33c-402d-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:17:00.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-p7rd4" for this suite.
Jan 26 11:17:07.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:17:07.162: INFO: namespace: e2e-tests-downward-api-p7rd4, resource: bindings, ignored listing per whitelist
Jan 26 11:17:07.207: INFO: namespace e2e-tests-downward-api-p7rd4 deletion completed in 6.292132327s

• [SLOW TEST:18.192 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
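
A minimal sketch (illustrative names and limit) of the downward API volume feature this test asserts on: the container's own memory limit is exposed as a file, and the test container simply cats it.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-downward-limits
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
# the file contains the limit in bytes (64Mi -> 67108864)
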
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:17:07.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 11:17:07.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 26 11:17:07.474: INFO: stderr: ""
Jan 26 11:17:07.474: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 26 11:17:07.478: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:17:07.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zd6v4" for this suite.
Jan 26 11:17:13.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:17:13.656: INFO: namespace: e2e-tests-kubectl-zd6v4, resource: bindings, ignored listing per whitelist
Jan 26 11:17:13.791: INFO: namespace e2e-tests-kubectl-zd6v4 deletion completed in 6.232560635s

S [SKIPPING] [6.584 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan 26 11:17:07.478: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
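
That skip comes from a client/server version comparison: the client is v1.13.12, but the spec also requires the API server to be at least v1.13.12. A quick way to see both sides:

kubectl version --short    # compare Client Version against Server Version
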
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:17:13.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 26 11:17:14.132: INFO: Waiting up to 5m0s for pod "pod-6760a9b0-402d-11ea-b664-0242ac110005" in namespace "e2e-tests-emptydir-shmd6" to be "success or failure"
Jan 26 11:17:14.156: INFO: Pod "pod-6760a9b0-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.389484ms
Jan 26 11:17:16.168: INFO: Pod "pod-6760a9b0-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03647237s
Jan 26 11:17:18.737: INFO: Pod "pod-6760a9b0-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.604785634s
Jan 26 11:17:20.750: INFO: Pod "pod-6760a9b0-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.61847712s
Jan 26 11:17:22.761: INFO: Pod "pod-6760a9b0-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.629586975s
Jan 26 11:17:24.810: INFO: Pod "pod-6760a9b0-402d-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.678404217s
STEP: Saw pod success
Jan 26 11:17:24.810: INFO: Pod "pod-6760a9b0-402d-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:17:24.893: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6760a9b0-402d-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 11:17:24.989: INFO: Waiting for pod pod-6760a9b0-402d-11ea-b664-0242ac110005 to disappear
Jan 26 11:17:25.004: INFO: Pod pod-6760a9b0-402d-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:17:25.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-shmd6" for this suite.
Jan 26 11:17:31.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:17:31.225: INFO: namespace: e2e-tests-emptydir-shmd6, resource: bindings, ignored listing per whitelist
Jan 26 11:17:31.251: INFO: namespace e2e-tests-emptydir-shmd6 deletion completed in 6.175654653s

• [SLOW TEST:17.460 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
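
The (root,0777,tmpfs) case boils down to a pod like this (illustrative image and command): an emptyDir backed by memory, on which the container creates content as root and verifies the 0777 permission bits.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "mount | grep /test-volume && touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory          # "tmpfs" in the test name: RAM-backed emptyDir
EOF
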
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:17:31.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-z6mn8
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 26 11:17:31.434: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 26 11:18:09.896: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-z6mn8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 11:18:09.896: INFO: >>> kubeConfig: /root/.kube/config
I0126 11:18:09.963769       8 log.go:172] (0xc00070bd90) (0xc002160e60) Create stream
I0126 11:18:09.963845       8 log.go:172] (0xc00070bd90) (0xc002160e60) Stream added, broadcasting: 1
I0126 11:18:09.967766       8 log.go:172] (0xc00070bd90) Reply frame received for 1
I0126 11:18:09.967829       8 log.go:172] (0xc00070bd90) (0xc001fc20a0) Create stream
I0126 11:18:09.967845       8 log.go:172] (0xc00070bd90) (0xc001fc20a0) Stream added, broadcasting: 3
I0126 11:18:09.968890       8 log.go:172] (0xc00070bd90) Reply frame received for 3
I0126 11:18:09.968925       8 log.go:172] (0xc00070bd90) (0xc0019b30e0) Create stream
I0126 11:18:09.968935       8 log.go:172] (0xc00070bd90) (0xc0019b30e0) Stream added, broadcasting: 5
I0126 11:18:09.969939       8 log.go:172] (0xc00070bd90) Reply frame received for 5
I0126 11:18:11.115729       8 log.go:172] (0xc00070bd90) Data frame received for 3
I0126 11:18:11.115801       8 log.go:172] (0xc001fc20a0) (3) Data frame handling
I0126 11:18:11.115840       8 log.go:172] (0xc001fc20a0) (3) Data frame sent
I0126 11:18:11.253634       8 log.go:172] (0xc00070bd90) Data frame received for 1
I0126 11:18:11.253736       8 log.go:172] (0xc00070bd90) (0xc001fc20a0) Stream removed, broadcasting: 3
I0126 11:18:11.253821       8 log.go:172] (0xc002160e60) (1) Data frame handling
I0126 11:18:11.253888       8 log.go:172] (0xc002160e60) (1) Data frame sent
I0126 11:18:11.253934       8 log.go:172] (0xc00070bd90) (0xc0019b30e0) Stream removed, broadcasting: 5
I0126 11:18:11.253960       8 log.go:172] (0xc00070bd90) (0xc002160e60) Stream removed, broadcasting: 1
I0126 11:18:11.253987       8 log.go:172] (0xc00070bd90) Go away received
I0126 11:18:11.254227       8 log.go:172] (0xc00070bd90) (0xc002160e60) Stream removed, broadcasting: 1
I0126 11:18:11.254272       8 log.go:172] (0xc00070bd90) (0xc001fc20a0) Stream removed, broadcasting: 3
I0126 11:18:11.254284       8 log.go:172] (0xc00070bd90) (0xc0019b30e0) Stream removed, broadcasting: 5
Jan 26 11:18:11.254: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:18:11.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-z6mn8" for this suite.
Jan 26 11:18:35.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:18:35.393: INFO: namespace: e2e-tests-pod-network-test-z6mn8, resource: bindings, ignored listing per whitelist
Jan 26 11:18:35.578: INFO: namespace e2e-tests-pod-network-test-z6mn8 deletion completed in 24.3091047s

• [SLOW TEST:64.327 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
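Note: the command logged above (echo 'hostName' | nc -w 1 -u 10.32.0.4 8081) sends a UDP probe from the host-network test pod to the netserver pod and expects the peer's hostname back. A rough stdlib-only Go equivalent of that single probe (address and port copied from the log; error handling kept minimal):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the netserver pod's UDP endpoint (IP/port as seen in the log above).
	conn, err := net.DialTimeout("udp", "10.32.0.4:8081", time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// The netserver replies to the literal string "hostName" with its hostname.
	if _, err := conn.Write([]byte("hostName")); err != nil {
		panic(err)
	}
	conn.SetReadDeadline(time.Now().Add(time.Second))
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("endpoint answered: %q\n", buf[:n]) // expected to contain "netserver-0"
}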
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:18:35.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-xq75m
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 26 11:18:35.916: INFO: Found 0 stateful pods, waiting for 3
Jan 26 11:18:45.933: INFO: Found 2 stateful pods, waiting for 3
Jan 26 11:18:55.932: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 11:18:55.932: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 11:18:55.932: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 26 11:19:05.980: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 11:19:05.980: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 11:19:05.980: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 26 11:19:06.113: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 26 11:19:16.257: INFO: Updating stateful set ss2
Jan 26 11:19:16.281: INFO: Waiting for Pod e2e-tests-statefulset-xq75m/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 26 11:19:26.305: INFO: Waiting for Pod e2e-tests-statefulset-xq75m/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 26 11:19:36.685: INFO: Found 2 stateful pods, waiting for 3
Jan 26 11:19:46.710: INFO: Found 2 stateful pods, waiting for 3
Jan 26 11:19:56.740: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 11:19:56.740: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 11:19:56.740: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 26 11:20:06.713: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 11:20:06.713: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 11:20:06.713: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 26 11:20:06.762: INFO: Updating stateful set ss2
Jan 26 11:20:06.813: INFO: Waiting for Pod e2e-tests-statefulset-xq75m/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 26 11:20:16.840: INFO: Waiting for Pod e2e-tests-statefulset-xq75m/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 26 11:20:26.926: INFO: Updating stateful set ss2
Jan 26 11:20:27.074: INFO: Waiting for StatefulSet e2e-tests-statefulset-xq75m/ss2 to complete update
Jan 26 11:20:27.074: INFO: Waiting for Pod e2e-tests-statefulset-xq75m/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 26 11:20:37.099: INFO: Waiting for StatefulSet e2e-tests-statefulset-xq75m/ss2 to complete update
Jan 26 11:20:37.099: INFO: Waiting for Pod e2e-tests-statefulset-xq75m/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 26 11:20:47.104: INFO: Waiting for StatefulSet e2e-tests-statefulset-xq75m/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 26 11:20:57.102: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xq75m
Jan 26 11:20:57.110: INFO: Scaling statefulset ss2 to 0
Jan 26 11:21:17.164: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 11:21:17.172: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:21:17.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-xq75m" for this suite.
Jan 26 11:21:25.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:21:25.515: INFO: namespace: e2e-tests-statefulset-xq75m, resource: bindings, ignored listing per whitelist
Jan 26 11:21:25.559: INFO: namespace e2e-tests-statefulset-xq75m deletion completed in 8.282061354s

• [SLOW TEST:169.980 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
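Note: the canary and phased stages above are driven by the RollingUpdate partition: only pods with an ordinal greater than or equal to the partition receive the new revision, the rest stay on the old one. A sketch of the relevant update-strategy fragment with the apps/v1 Go types (replica and partition values are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	partition := int32(2) // only ordinals >= 2 (ss2-2) get the new revision: the "canary"
	strategy := appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	// Lowering the partition step by step (2 -> 1 -> 0) produces the phased roll-out seen above.
	out, _ := json.MarshalIndent(strategy, "", "  ")
	fmt.Println(string(out))
}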
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:21:25.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 11:21:25.856: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd6f9f05-402d-11ea-b664-0242ac110005" in namespace "e2e-tests-downward-api-jmpmk" to be "success or failure"
Jan 26 11:21:25.914: INFO: Pod "downwardapi-volume-fd6f9f05-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.835935ms
Jan 26 11:21:27.981: INFO: Pod "downwardapi-volume-fd6f9f05-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125019907s
Jan 26 11:21:30.007: INFO: Pod "downwardapi-volume-fd6f9f05-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151534749s
Jan 26 11:21:32.101: INFO: Pod "downwardapi-volume-fd6f9f05-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.245215886s
Jan 26 11:21:34.358: INFO: Pod "downwardapi-volume-fd6f9f05-402d-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.502184813s
Jan 26 11:21:36.380: INFO: Pod "downwardapi-volume-fd6f9f05-402d-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.524127995s
STEP: Saw pod success
Jan 26 11:21:36.380: INFO: Pod "downwardapi-volume-fd6f9f05-402d-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:21:36.387: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fd6f9f05-402d-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 11:21:37.102: INFO: Waiting for pod downwardapi-volume-fd6f9f05-402d-11ea-b664-0242ac110005 to disappear
Jan 26 11:21:37.116: INFO: Pod downwardapi-volume-fd6f9f05-402d-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:21:37.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jmpmk" for this suite.
Jan 26 11:21:43.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:21:43.400: INFO: namespace: e2e-tests-downward-api-jmpmk, resource: bindings, ignored listing per whitelist
Jan 26 11:21:43.484: INFO: namespace e2e-tests-downward-api-jmpmk deletion completed in 6.353700029s

• [SLOW TEST:17.925 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
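Note: the downward API volume test projects the container's own limits.cpu into a file and reads it back from inside the container. A minimal sketch of such a pod, assuming illustrative names and a 500m limit:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}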
S
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:21:43.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan 26 11:21:43.824: INFO: Waiting up to 5m0s for pod "client-containers-081fe8bc-402e-11ea-b664-0242ac110005" in namespace "e2e-tests-containers-v9f9s" to be "success or failure"
Jan 26 11:21:43.831: INFO: Pod "client-containers-081fe8bc-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.506316ms
Jan 26 11:21:45.856: INFO: Pod "client-containers-081fe8bc-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031893376s
Jan 26 11:21:47.886: INFO: Pod "client-containers-081fe8bc-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061674789s
Jan 26 11:21:49.905: INFO: Pod "client-containers-081fe8bc-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080389226s
Jan 26 11:21:51.925: INFO: Pod "client-containers-081fe8bc-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100532353s
Jan 26 11:21:53.939: INFO: Pod "client-containers-081fe8bc-402e-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.115067488s
STEP: Saw pod success
Jan 26 11:21:53.939: INFO: Pod "client-containers-081fe8bc-402e-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:21:53.945: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-081fe8bc-402e-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 11:21:54.353: INFO: Waiting for pod client-containers-081fe8bc-402e-11ea-b664-0242ac110005 to disappear
Jan 26 11:21:54.366: INFO: Pod client-containers-081fe8bc-402e-11ea-b664-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:21:54.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-v9f9s" for this suite.
Jan 26 11:22:00.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:22:00.833: INFO: namespace: e2e-tests-containers-v9f9s, resource: bindings, ignored listing per whitelist
Jan 26 11:22:00.910: INFO: namespace e2e-tests-containers-v9f9s deletion completed in 6.534605094s

• [SLOW TEST:17.425 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
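Note: the containers test above leaves both command and args empty, so the kubelet falls back to the image's own ENTRYPOINT/CMD. The relevant part of the spec is simply the absence of those fields; a short sketch (the image name here is illustrative, not the one the framework uses):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// No Command and no Args: the container runs whatever the image itself defines.
	c := corev1.Container{
		Name:  "test-container",
		Image: "busybox", // illustrative image
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}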
SSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:22:00.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 26 11:22:25.650: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-glqhf PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 11:22:25.650: INFO: >>> kubeConfig: /root/.kube/config
I0126 11:22:25.756313       8 log.go:172] (0xc0011b0420) (0xc001179400) Create stream
I0126 11:22:25.756427       8 log.go:172] (0xc0011b0420) (0xc001179400) Stream added, broadcasting: 1
I0126 11:22:25.789081       8 log.go:172] (0xc0011b0420) Reply frame received for 1
I0126 11:22:25.789242       8 log.go:172] (0xc0011b0420) (0xc001da8b40) Create stream
I0126 11:22:25.789256       8 log.go:172] (0xc0011b0420) (0xc001da8b40) Stream added, broadcasting: 3
I0126 11:22:25.791916       8 log.go:172] (0xc0011b0420) Reply frame received for 3
I0126 11:22:25.791952       8 log.go:172] (0xc0011b0420) (0xc0011794a0) Create stream
I0126 11:22:25.791970       8 log.go:172] (0xc0011b0420) (0xc0011794a0) Stream added, broadcasting: 5
I0126 11:22:25.795918       8 log.go:172] (0xc0011b0420) Reply frame received for 5
I0126 11:22:26.097713       8 log.go:172] (0xc0011b0420) Data frame received for 3
I0126 11:22:26.097800       8 log.go:172] (0xc001da8b40) (3) Data frame handling
I0126 11:22:26.097818       8 log.go:172] (0xc001da8b40) (3) Data frame sent
I0126 11:22:26.220286       8 log.go:172] (0xc0011b0420) (0xc001da8b40) Stream removed, broadcasting: 3
I0126 11:22:26.220387       8 log.go:172] (0xc0011b0420) Data frame received for 1
I0126 11:22:26.220419       8 log.go:172] (0xc0011b0420) (0xc0011794a0) Stream removed, broadcasting: 5
I0126 11:22:26.220443       8 log.go:172] (0xc001179400) (1) Data frame handling
I0126 11:22:26.220474       8 log.go:172] (0xc001179400) (1) Data frame sent
I0126 11:22:26.220485       8 log.go:172] (0xc0011b0420) (0xc001179400) Stream removed, broadcasting: 1
I0126 11:22:26.220501       8 log.go:172] (0xc0011b0420) Go away received
I0126 11:22:26.220634       8 log.go:172] (0xc0011b0420) (0xc001179400) Stream removed, broadcasting: 1
I0126 11:22:26.220648       8 log.go:172] (0xc0011b0420) (0xc001da8b40) Stream removed, broadcasting: 3
I0126 11:22:26.220659       8 log.go:172] (0xc0011b0420) (0xc0011794a0) Stream removed, broadcasting: 5
Jan 26 11:22:26.220: INFO: Exec stderr: ""
Jan 26 11:22:26.220: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-glqhf PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 11:22:26.220: INFO: >>> kubeConfig: /root/.kube/config
I0126 11:22:26.293629       8 log.go:172] (0xc000b0b760) (0xc000a237c0) Create stream
I0126 11:22:26.293704       8 log.go:172] (0xc000b0b760) (0xc000a237c0) Stream added, broadcasting: 1
I0126 11:22:26.298894       8 log.go:172] (0xc000b0b760) Reply frame received for 1
I0126 11:22:26.298945       8 log.go:172] (0xc000b0b760) (0xc001da8c80) Create stream
I0126 11:22:26.298964       8 log.go:172] (0xc000b0b760) (0xc001da8c80) Stream added, broadcasting: 3
I0126 11:22:26.300543       8 log.go:172] (0xc000b0b760) Reply frame received for 3
I0126 11:22:26.300572       8 log.go:172] (0xc000b0b760) (0xc001da8d20) Create stream
I0126 11:22:26.300584       8 log.go:172] (0xc000b0b760) (0xc001da8d20) Stream added, broadcasting: 5
I0126 11:22:26.301604       8 log.go:172] (0xc000b0b760) Reply frame received for 5
I0126 11:22:26.414985       8 log.go:172] (0xc000b0b760) Data frame received for 3
I0126 11:22:26.415049       8 log.go:172] (0xc001da8c80) (3) Data frame handling
I0126 11:22:26.415089       8 log.go:172] (0xc001da8c80) (3) Data frame sent
I0126 11:22:26.647039       8 log.go:172] (0xc000b0b760) (0xc001da8d20) Stream removed, broadcasting: 5
I0126 11:22:26.647188       8 log.go:172] (0xc000b0b760) Data frame received for 1
I0126 11:22:26.647235       8 log.go:172] (0xc000b0b760) (0xc001da8c80) Stream removed, broadcasting: 3
I0126 11:22:26.647297       8 log.go:172] (0xc000a237c0) (1) Data frame handling
I0126 11:22:26.647342       8 log.go:172] (0xc000a237c0) (1) Data frame sent
I0126 11:22:26.647367       8 log.go:172] (0xc000b0b760) (0xc000a237c0) Stream removed, broadcasting: 1
I0126 11:22:26.647387       8 log.go:172] (0xc000b0b760) Go away received
I0126 11:22:26.647637       8 log.go:172] (0xc000b0b760) (0xc000a237c0) Stream removed, broadcasting: 1
I0126 11:22:26.647658       8 log.go:172] (0xc000b0b760) (0xc001da8c80) Stream removed, broadcasting: 3
I0126 11:22:26.647676       8 log.go:172] (0xc000b0b760) (0xc001da8d20) Stream removed, broadcasting: 5
Jan 26 11:22:26.647: INFO: Exec stderr: ""
Jan 26 11:22:26.647: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-glqhf PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 11:22:26.647: INFO: >>> kubeConfig: /root/.kube/config
I0126 11:22:26.717004       8 log.go:172] (0xc001d4e370) (0xc001526fa0) Create stream
I0126 11:22:26.717058       8 log.go:172] (0xc001d4e370) (0xc001526fa0) Stream added, broadcasting: 1
I0126 11:22:26.723701       8 log.go:172] (0xc001d4e370) Reply frame received for 1
I0126 11:22:26.723738       8 log.go:172] (0xc001d4e370) (0xc000a23860) Create stream
I0126 11:22:26.723748       8 log.go:172] (0xc001d4e370) (0xc000a23860) Stream added, broadcasting: 3
I0126 11:22:26.724495       8 log.go:172] (0xc001d4e370) Reply frame received for 3
I0126 11:22:26.724515       8 log.go:172] (0xc001d4e370) (0xc000a23900) Create stream
I0126 11:22:26.724534       8 log.go:172] (0xc001d4e370) (0xc000a23900) Stream added, broadcasting: 5
I0126 11:22:26.725940       8 log.go:172] (0xc001d4e370) Reply frame received for 5
I0126 11:22:26.821124       8 log.go:172] (0xc001d4e370) Data frame received for 3
I0126 11:22:26.821317       8 log.go:172] (0xc000a23860) (3) Data frame handling
I0126 11:22:26.821356       8 log.go:172] (0xc000a23860) (3) Data frame sent
I0126 11:22:26.958343       8 log.go:172] (0xc001d4e370) (0xc000a23860) Stream removed, broadcasting: 3
I0126 11:22:26.958431       8 log.go:172] (0xc001d4e370) Data frame received for 1
I0126 11:22:26.958438       8 log.go:172] (0xc001526fa0) (1) Data frame handling
I0126 11:22:26.958451       8 log.go:172] (0xc001526fa0) (1) Data frame sent
I0126 11:22:26.958457       8 log.go:172] (0xc001d4e370) (0xc001526fa0) Stream removed, broadcasting: 1
I0126 11:22:26.958514       8 log.go:172] (0xc001d4e370) (0xc000a23900) Stream removed, broadcasting: 5
I0126 11:22:26.958589       8 log.go:172] (0xc001d4e370) Go away received
I0126 11:22:26.958610       8 log.go:172] (0xc001d4e370) (0xc001526fa0) Stream removed, broadcasting: 1
I0126 11:22:26.958620       8 log.go:172] (0xc001d4e370) (0xc000a23860) Stream removed, broadcasting: 3
I0126 11:22:26.958676       8 log.go:172] (0xc001d4e370) (0xc000a23900) Stream removed, broadcasting: 5
Jan 26 11:22:26.958: INFO: Exec stderr: ""
Jan 26 11:22:26.958: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-glqhf PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 11:22:26.958: INFO: >>> kubeConfig: /root/.kube/config
I0126 11:22:27.063679       8 log.go:172] (0xc001d4e840) (0xc001527220) Create stream
I0126 11:22:27.063733       8 log.go:172] (0xc001d4e840) (0xc001527220) Stream added, broadcasting: 1
I0126 11:22:27.067467       8 log.go:172] (0xc001d4e840) Reply frame received for 1
I0126 11:22:27.067521       8 log.go:172] (0xc001d4e840) (0xc000eae780) Create stream
I0126 11:22:27.067534       8 log.go:172] (0xc001d4e840) (0xc000eae780) Stream added, broadcasting: 3
I0126 11:22:27.068315       8 log.go:172] (0xc001d4e840) Reply frame received for 3
I0126 11:22:27.068346       8 log.go:172] (0xc001d4e840) (0xc000a239a0) Create stream
I0126 11:22:27.068358       8 log.go:172] (0xc001d4e840) (0xc000a239a0) Stream added, broadcasting: 5
I0126 11:22:27.069153       8 log.go:172] (0xc001d4e840) Reply frame received for 5
I0126 11:22:27.146426       8 log.go:172] (0xc001d4e840) Data frame received for 3
I0126 11:22:27.146526       8 log.go:172] (0xc000eae780) (3) Data frame handling
I0126 11:22:27.146628       8 log.go:172] (0xc000eae780) (3) Data frame sent
I0126 11:22:27.258615       8 log.go:172] (0xc001d4e840) (0xc000eae780) Stream removed, broadcasting: 3
I0126 11:22:27.258750       8 log.go:172] (0xc001d4e840) Data frame received for 1
I0126 11:22:27.258763       8 log.go:172] (0xc001527220) (1) Data frame handling
I0126 11:22:27.258779       8 log.go:172] (0xc001527220) (1) Data frame sent
I0126 11:22:27.258822       8 log.go:172] (0xc001d4e840) (0xc001527220) Stream removed, broadcasting: 1
I0126 11:22:27.258887       8 log.go:172] (0xc001d4e840) (0xc000a239a0) Stream removed, broadcasting: 5
I0126 11:22:27.258924       8 log.go:172] (0xc001d4e840) Go away received
I0126 11:22:27.258978       8 log.go:172] (0xc001d4e840) (0xc001527220) Stream removed, broadcasting: 1
I0126 11:22:27.259013       8 log.go:172] (0xc001d4e840) (0xc000eae780) Stream removed, broadcasting: 3
I0126 11:22:27.259030       8 log.go:172] (0xc001d4e840) (0xc000a239a0) Stream removed, broadcasting: 5
Jan 26 11:22:27.259: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 26 11:22:27.259: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-glqhf PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 11:22:27.259: INFO: >>> kubeConfig: /root/.kube/config
I0126 11:22:27.327067       8 log.go:172] (0xc000b0bc30) (0xc000a23f40) Create stream
I0126 11:22:27.327347       8 log.go:172] (0xc000b0bc30) (0xc000a23f40) Stream added, broadcasting: 1
I0126 11:22:27.333312       8 log.go:172] (0xc000b0bc30) Reply frame received for 1
I0126 11:22:27.333353       8 log.go:172] (0xc000b0bc30) (0xc001179540) Create stream
I0126 11:22:27.333363       8 log.go:172] (0xc000b0bc30) (0xc001179540) Stream added, broadcasting: 3
I0126 11:22:27.334462       8 log.go:172] (0xc000b0bc30) Reply frame received for 3
I0126 11:22:27.334480       8 log.go:172] (0xc000b0bc30) (0xc0011795e0) Create stream
I0126 11:22:27.334487       8 log.go:172] (0xc000b0bc30) (0xc0011795e0) Stream added, broadcasting: 5
I0126 11:22:27.335323       8 log.go:172] (0xc000b0bc30) Reply frame received for 5
I0126 11:22:27.456063       8 log.go:172] (0xc000b0bc30) Data frame received for 3
I0126 11:22:27.456135       8 log.go:172] (0xc001179540) (3) Data frame handling
I0126 11:22:27.456160       8 log.go:172] (0xc001179540) (3) Data frame sent
I0126 11:22:27.608625       8 log.go:172] (0xc000b0bc30) (0xc001179540) Stream removed, broadcasting: 3
I0126 11:22:27.608920       8 log.go:172] (0xc000b0bc30) Data frame received for 1
I0126 11:22:27.608979       8 log.go:172] (0xc000a23f40) (1) Data frame handling
I0126 11:22:27.608990       8 log.go:172] (0xc000a23f40) (1) Data frame sent
I0126 11:22:27.609025       8 log.go:172] (0xc000b0bc30) (0xc0011795e0) Stream removed, broadcasting: 5
I0126 11:22:27.609078       8 log.go:172] (0xc000b0bc30) (0xc000a23f40) Stream removed, broadcasting: 1
I0126 11:22:27.609143       8 log.go:172] (0xc000b0bc30) Go away received
I0126 11:22:27.609444       8 log.go:172] (0xc000b0bc30) (0xc000a23f40) Stream removed, broadcasting: 1
I0126 11:22:27.609462       8 log.go:172] (0xc000b0bc30) (0xc001179540) Stream removed, broadcasting: 3
I0126 11:22:27.609467       8 log.go:172] (0xc000b0bc30) (0xc0011795e0) Stream removed, broadcasting: 5
Jan 26 11:22:27.609: INFO: Exec stderr: ""
Jan 26 11:22:27.609: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-glqhf PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 11:22:27.609: INFO: >>> kubeConfig: /root/.kube/config
I0126 11:22:27.667255       8 log.go:172] (0xc00176e2c0) (0xc001da9040) Create stream
I0126 11:22:27.667288       8 log.go:172] (0xc00176e2c0) (0xc001da9040) Stream added, broadcasting: 1
I0126 11:22:27.674806       8 log.go:172] (0xc00176e2c0) Reply frame received for 1
I0126 11:22:27.674885       8 log.go:172] (0xc00176e2c0) (0xc0015272c0) Create stream
I0126 11:22:27.674903       8 log.go:172] (0xc00176e2c0) (0xc0015272c0) Stream added, broadcasting: 3
I0126 11:22:27.676340       8 log.go:172] (0xc00176e2c0) Reply frame received for 3
I0126 11:22:27.676357       8 log.go:172] (0xc00176e2c0) (0xc001179680) Create stream
I0126 11:22:27.676365       8 log.go:172] (0xc00176e2c0) (0xc001179680) Stream added, broadcasting: 5
I0126 11:22:27.677381       8 log.go:172] (0xc00176e2c0) Reply frame received for 5
I0126 11:22:27.756809       8 log.go:172] (0xc00176e2c0) Data frame received for 3
I0126 11:22:27.756846       8 log.go:172] (0xc0015272c0) (3) Data frame handling
I0126 11:22:27.756863       8 log.go:172] (0xc0015272c0) (3) Data frame sent
I0126 11:22:27.896923       8 log.go:172] (0xc00176e2c0) Data frame received for 1
I0126 11:22:27.896979       8 log.go:172] (0xc001da9040) (1) Data frame handling
I0126 11:22:27.897006       8 log.go:172] (0xc001da9040) (1) Data frame sent
I0126 11:22:27.897018       8 log.go:172] (0xc00176e2c0) (0xc001da9040) Stream removed, broadcasting: 1
I0126 11:22:27.897459       8 log.go:172] (0xc00176e2c0) (0xc0015272c0) Stream removed, broadcasting: 3
I0126 11:22:27.897712       8 log.go:172] (0xc00176e2c0) (0xc001179680) Stream removed, broadcasting: 5
I0126 11:22:27.897766       8 log.go:172] (0xc00176e2c0) (0xc001da9040) Stream removed, broadcasting: 1
I0126 11:22:27.897777       8 log.go:172] (0xc00176e2c0) (0xc0015272c0) Stream removed, broadcasting: 3
I0126 11:22:27.897789       8 log.go:172] (0xc00176e2c0) (0xc001179680) Stream removed, broadcasting: 5
Jan 26 11:22:27.898: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 26 11:22:27.898: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-glqhf PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 11:22:27.898: INFO: >>> kubeConfig: /root/.kube/config
I0126 11:22:27.970948       8 log.go:172] (0xc001e6a160) (0xc001d841e0) Create stream
I0126 11:22:27.971085       8 log.go:172] (0xc001e6a160) (0xc001d841e0) Stream added, broadcasting: 1
I0126 11:22:27.985228       8 log.go:172] (0xc001e6a160) Reply frame received for 1
I0126 11:22:27.985286       8 log.go:172] (0xc001e6a160) (0xc000a22000) Create stream
I0126 11:22:27.985296       8 log.go:172] (0xc001e6a160) (0xc000a22000) Stream added, broadcasting: 3
I0126 11:22:27.986615       8 log.go:172] (0xc001e6a160) Reply frame received for 3
I0126 11:22:27.986669       8 log.go:172] (0xc001e6a160) (0xc001a16000) Create stream
I0126 11:22:27.986679       8 log.go:172] (0xc001e6a160) (0xc001a16000) Stream added, broadcasting: 5
I0126 11:22:27.987695       8 log.go:172] (0xc001e6a160) Reply frame received for 5
I0126 11:22:28.148333       8 log.go:172] (0xc001e6a160) Data frame received for 3
I0126 11:22:28.148442       8 log.go:172] (0xc000a22000) (3) Data frame handling
I0126 11:22:28.148466       8 log.go:172] (0xc000a22000) (3) Data frame sent
I0126 11:22:28.291005       8 log.go:172] (0xc001e6a160) Data frame received for 1
I0126 11:22:28.291097       8 log.go:172] (0xc001e6a160) (0xc000a22000) Stream removed, broadcasting: 3
I0126 11:22:28.291127       8 log.go:172] (0xc001d841e0) (1) Data frame handling
I0126 11:22:28.291161       8 log.go:172] (0xc001d841e0) (1) Data frame sent
I0126 11:22:28.291195       8 log.go:172] (0xc001e6a160) (0xc001a16000) Stream removed, broadcasting: 5
I0126 11:22:28.291216       8 log.go:172] (0xc001e6a160) (0xc001d841e0) Stream removed, broadcasting: 1
I0126 11:22:28.291235       8 log.go:172] (0xc001e6a160) Go away received
I0126 11:22:28.291303       8 log.go:172] (0xc001e6a160) (0xc001d841e0) Stream removed, broadcasting: 1
I0126 11:22:28.291315       8 log.go:172] (0xc001e6a160) (0xc000a22000) Stream removed, broadcasting: 3
I0126 11:22:28.291323       8 log.go:172] (0xc001e6a160) (0xc001a16000) Stream removed, broadcasting: 5
Jan 26 11:22:28.291: INFO: Exec stderr: ""
Jan 26 11:22:28.291: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-glqhf PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 11:22:28.291: INFO: >>> kubeConfig: /root/.kube/config
I0126 11:22:28.357846       8 log.go:172] (0xc000b0b600) (0xc001a16280) Create stream
I0126 11:22:28.358114       8 log.go:172] (0xc000b0b600) (0xc001a16280) Stream added, broadcasting: 1
I0126 11:22:28.363390       8 log.go:172] (0xc000b0b600) Reply frame received for 1
I0126 11:22:28.363456       8 log.go:172] (0xc000b0b600) (0xc000339400) Create stream
I0126 11:22:28.363480       8 log.go:172] (0xc000b0b600) (0xc000339400) Stream added, broadcasting: 3
I0126 11:22:28.364583       8 log.go:172] (0xc000b0b600) Reply frame received for 3
I0126 11:22:28.364605       8 log.go:172] (0xc000b0b600) (0xc0003652c0) Create stream
I0126 11:22:28.364614       8 log.go:172] (0xc000b0b600) (0xc0003652c0) Stream added, broadcasting: 5
I0126 11:22:28.368153       8 log.go:172] (0xc000b0b600) Reply frame received for 5
I0126 11:22:28.507876       8 log.go:172] (0xc000b0b600) Data frame received for 3
I0126 11:22:28.507959       8 log.go:172] (0xc000339400) (3) Data frame handling
I0126 11:22:28.507988       8 log.go:172] (0xc000339400) (3) Data frame sent
I0126 11:22:28.723724       8 log.go:172] (0xc000b0b600) Data frame received for 1
I0126 11:22:28.723806       8 log.go:172] (0xc000b0b600) (0xc000339400) Stream removed, broadcasting: 3
I0126 11:22:28.723861       8 log.go:172] (0xc001a16280) (1) Data frame handling
I0126 11:22:28.723878       8 log.go:172] (0xc001a16280) (1) Data frame sent
I0126 11:22:28.723909       8 log.go:172] (0xc000b0b600) (0xc0003652c0) Stream removed, broadcasting: 5
I0126 11:22:28.723936       8 log.go:172] (0xc000b0b600) (0xc001a16280) Stream removed, broadcasting: 1
I0126 11:22:28.723958       8 log.go:172] (0xc000b0b600) Go away received
I0126 11:22:28.724096       8 log.go:172] (0xc000b0b600) (0xc001a16280) Stream removed, broadcasting: 1
I0126 11:22:28.724115       8 log.go:172] (0xc000b0b600) (0xc000339400) Stream removed, broadcasting: 3
I0126 11:22:28.724130       8 log.go:172] (0xc000b0b600) (0xc0003652c0) Stream removed, broadcasting: 5
Jan 26 11:22:28.724: INFO: Exec stderr: ""
Jan 26 11:22:28.724: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-glqhf PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 11:22:28.724: INFO: >>> kubeConfig: /root/.kube/config
I0126 11:22:28.784399       8 log.go:172] (0xc001d4e370) (0xc000194820) Create stream
I0126 11:22:28.784480       8 log.go:172] (0xc001d4e370) (0xc000194820) Stream added, broadcasting: 1
I0126 11:22:28.787269       8 log.go:172] (0xc001d4e370) Reply frame received for 1
I0126 11:22:28.787296       8 log.go:172] (0xc001d4e370) (0xc001a16320) Create stream
I0126 11:22:28.787306       8 log.go:172] (0xc001d4e370) (0xc001a16320) Stream added, broadcasting: 3
I0126 11:22:28.788235       8 log.go:172] (0xc001d4e370) Reply frame received for 3
I0126 11:22:28.788265       8 log.go:172] (0xc001d4e370) (0xc000365a40) Create stream
I0126 11:22:28.788278       8 log.go:172] (0xc001d4e370) (0xc000365a40) Stream added, broadcasting: 5
I0126 11:22:28.789052       8 log.go:172] (0xc001d4e370) Reply frame received for 5
I0126 11:22:28.883162       8 log.go:172] (0xc001d4e370) Data frame received for 3
I0126 11:22:28.883223       8 log.go:172] (0xc001a16320) (3) Data frame handling
I0126 11:22:28.883252       8 log.go:172] (0xc001a16320) (3) Data frame sent
I0126 11:22:29.022572       8 log.go:172] (0xc001d4e370) (0xc001a16320) Stream removed, broadcasting: 3
I0126 11:22:29.022778       8 log.go:172] (0xc001d4e370) Data frame received for 1
I0126 11:22:29.022809       8 log.go:172] (0xc000194820) (1) Data frame handling
I0126 11:22:29.022834       8 log.go:172] (0xc000194820) (1) Data frame sent
I0126 11:22:29.022857       8 log.go:172] (0xc001d4e370) (0xc000194820) Stream removed, broadcasting: 1
I0126 11:22:29.023040       8 log.go:172] (0xc001d4e370) (0xc000365a40) Stream removed, broadcasting: 5
I0126 11:22:29.023138       8 log.go:172] (0xc001d4e370) Go away received
I0126 11:22:29.023269       8 log.go:172] (0xc001d4e370) (0xc000194820) Stream removed, broadcasting: 1
I0126 11:22:29.023325       8 log.go:172] (0xc001d4e370) (0xc001a16320) Stream removed, broadcasting: 3
I0126 11:22:29.023361       8 log.go:172] (0xc001d4e370) (0xc000365a40) Stream removed, broadcasting: 5
Jan 26 11:22:29.023: INFO: Exec stderr: ""
Jan 26 11:22:29.023: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-glqhf PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 11:22:29.023: INFO: >>> kubeConfig: /root/.kube/config
I0126 11:22:29.091849       8 log.go:172] (0xc001d4e840) (0xc00025a320) Create stream
I0126 11:22:29.091895       8 log.go:172] (0xc001d4e840) (0xc00025a320) Stream added, broadcasting: 1
I0126 11:22:29.097205       8 log.go:172] (0xc001d4e840) Reply frame received for 1
I0126 11:22:29.097229       8 log.go:172] (0xc001d4e840) (0xc001a165a0) Create stream
I0126 11:22:29.097235       8 log.go:172] (0xc001d4e840) (0xc001a165a0) Stream added, broadcasting: 3
I0126 11:22:29.098526       8 log.go:172] (0xc001d4e840) Reply frame received for 3
I0126 11:22:29.098591       8 log.go:172] (0xc001d4e840) (0xc001a16640) Create stream
I0126 11:22:29.098612       8 log.go:172] (0xc001d4e840) (0xc001a16640) Stream added, broadcasting: 5
I0126 11:22:29.099788       8 log.go:172] (0xc001d4e840) Reply frame received for 5
I0126 11:22:29.212970       8 log.go:172] (0xc001d4e840) Data frame received for 3
I0126 11:22:29.213020       8 log.go:172] (0xc001a165a0) (3) Data frame handling
I0126 11:22:29.213038       8 log.go:172] (0xc001a165a0) (3) Data frame sent
I0126 11:22:29.304323       8 log.go:172] (0xc001d4e840) Data frame received for 1
I0126 11:22:29.304381       8 log.go:172] (0xc00025a320) (1) Data frame handling
I0126 11:22:29.304398       8 log.go:172] (0xc00025a320) (1) Data frame sent
I0126 11:22:29.304664       8 log.go:172] (0xc001d4e840) (0xc00025a320) Stream removed, broadcasting: 1
I0126 11:22:29.304997       8 log.go:172] (0xc001d4e840) (0xc001a165a0) Stream removed, broadcasting: 3
I0126 11:22:29.305165       8 log.go:172] (0xc001d4e840) (0xc001a16640) Stream removed, broadcasting: 5
I0126 11:22:29.305193       8 log.go:172] (0xc001d4e840) (0xc00025a320) Stream removed, broadcasting: 1
I0126 11:22:29.305202       8 log.go:172] (0xc001d4e840) (0xc001a165a0) Stream removed, broadcasting: 3
I0126 11:22:29.305211       8 log.go:172] (0xc001d4e840) (0xc001a16640) Stream removed, broadcasting: 5
I0126 11:22:29.305429       8 log.go:172] (0xc001d4e840) Go away received
Jan 26 11:22:29.305: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:22:29.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-glqhf" for this suite.
Jan 26 11:23:23.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:23:23.449: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-glqhf, resource: bindings, ignored listing per whitelist
Jan 26 11:23:23.515: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-glqhf deletion completed in 54.197830911s

• [SLOW TEST:82.604 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
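Note: the /etc/hosts checks above cover three cases: a normal pod (kubelet-managed hosts file), a container that mounts its own file over /etc/hosts (not managed), and a hostNetwork=true pod (not managed). A compact sketch of the two spec knobs that drive this behaviour, with illustrative names and paths:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Case 1: hostNetwork=true - the kubelet leaves /etc/hosts alone.
	hostNetSpec := corev1.PodSpec{HostNetwork: true}

	// Case 2: the container mounts a hostPath file over /etc/hosts, so it is
	// also not kubelet-managed (mirrors busybox-3 in the test above).
	mountSpec := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "busybox-3",
			Image: "busybox",
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "host-etc-hosts",
				MountPath: "/etc/hosts",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "host-etc-hosts",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
			},
		}},
	}

	for _, s := range []corev1.PodSpec{hostNetSpec, mountSpec} {
		out, _ := json.MarshalIndent(s, "", "  ")
		fmt.Println(string(out))
	}
}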
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:23:23.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 26 11:23:23.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rh9v9'
Jan 26 11:23:25.994: INFO: stderr: ""
Jan 26 11:23:25.994: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 26 11:23:25.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rh9v9'
Jan 26 11:23:26.159: INFO: stderr: ""
Jan 26 11:23:26.159: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Jan 26 11:23:31.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rh9v9'
Jan 26 11:23:31.311: INFO: stderr: ""
Jan 26 11:23:31.311: INFO: stdout: "update-demo-nautilus-5j75b update-demo-nautilus-bvcj9 "
Jan 26 11:23:31.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j75b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rh9v9'
Jan 26 11:23:31.435: INFO: stderr: ""
Jan 26 11:23:31.435: INFO: stdout: ""
Jan 26 11:23:31.435: INFO: update-demo-nautilus-5j75b is created but not running
Jan 26 11:23:36.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rh9v9'
Jan 26 11:23:36.734: INFO: stderr: ""
Jan 26 11:23:36.734: INFO: stdout: "update-demo-nautilus-5j75b update-demo-nautilus-bvcj9 "
Jan 26 11:23:36.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j75b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rh9v9'
Jan 26 11:23:36.908: INFO: stderr: ""
Jan 26 11:23:36.908: INFO: stdout: ""
Jan 26 11:23:36.909: INFO: update-demo-nautilus-5j75b is created but not running
Jan 26 11:23:41.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rh9v9'
Jan 26 11:23:42.122: INFO: stderr: ""
Jan 26 11:23:42.122: INFO: stdout: "update-demo-nautilus-5j75b update-demo-nautilus-bvcj9 "
Jan 26 11:23:42.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j75b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rh9v9'
Jan 26 11:23:42.291: INFO: stderr: ""
Jan 26 11:23:42.291: INFO: stdout: "true"
Jan 26 11:23:42.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j75b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rh9v9'
Jan 26 11:23:42.475: INFO: stderr: ""
Jan 26 11:23:42.475: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 11:23:42.475: INFO: validating pod update-demo-nautilus-5j75b
Jan 26 11:23:42.523: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 11:23:42.523: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 11:23:42.523: INFO: update-demo-nautilus-5j75b is verified up and running
Jan 26 11:23:42.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bvcj9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rh9v9'
Jan 26 11:23:42.625: INFO: stderr: ""
Jan 26 11:23:42.625: INFO: stdout: "true"
Jan 26 11:23:42.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bvcj9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rh9v9'
Jan 26 11:23:42.712: INFO: stderr: ""
Jan 26 11:23:42.712: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 11:23:42.712: INFO: validating pod update-demo-nautilus-bvcj9
Jan 26 11:23:42.735: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 11:23:42.735: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 11:23:42.736: INFO: update-demo-nautilus-bvcj9 is verified up and running
STEP: using delete to clean up resources
Jan 26 11:23:42.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rh9v9'
Jan 26 11:23:42.873: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 11:23:42.874: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 26 11:23:42.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-rh9v9'
Jan 26 11:23:43.043: INFO: stderr: "No resources found.\n"
Jan 26 11:23:43.043: INFO: stdout: ""
Jan 26 11:23:43.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-rh9v9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 26 11:23:43.218: INFO: stderr: ""
Jan 26 11:23:43.219: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:23:43.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rh9v9" for this suite.
Jan 26 11:24:07.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:24:07.361: INFO: namespace: e2e-tests-kubectl-rh9v9, resource: bindings, ignored listing per whitelist
Jan 26 11:24:07.456: INFO: namespace e2e-tests-kubectl-rh9v9 deletion completed in 24.221248761s

• [SLOW TEST:43.941 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
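Note: the loop above repeatedly runs kubectl with a go-template to poll the update-demo pod names and their readiness until both replicas report running. A rough Go equivalent of one poll iteration that simply shells out to the same kubectl command (kubeconfig path and namespace copied from the log; kubectl is assumed to be on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test uses to list the pods backing the replication controller.
	args := []string{
		"--kubeconfig=/root/.kube/config",
		"get", "pods",
		"-o", "template",
		"--template={{range .items}}{{.metadata.name}} {{end}}",
		"-l", "name=update-demo",
		"--namespace=e2e-tests-kubectl-rh9v9",
	}
	out, err := exec.Command("kubectl", args...).Output()
	if err != nil {
		panic(err)
	}
	names := strings.Fields(string(out))
	fmt.Printf("found %d update-demo pods: %v\n", len(names), names)
}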
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:24:07.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-5dede760-402e-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 11:24:07.764: INFO: Waiting up to 5m0s for pod "pod-configmaps-5df73afd-402e-11ea-b664-0242ac110005" in namespace "e2e-tests-configmap-s7vhp" to be "success or failure"
Jan 26 11:24:07.810: INFO: Pod "pod-configmaps-5df73afd-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.596789ms
Jan 26 11:24:10.517: INFO: Pod "pod-configmaps-5df73afd-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.752257561s
Jan 26 11:24:12.561: INFO: Pod "pod-configmaps-5df73afd-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.796549543s
Jan 26 11:24:14.588: INFO: Pod "pod-configmaps-5df73afd-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.823601767s
Jan 26 11:24:16.701: INFO: Pod "pod-configmaps-5df73afd-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.936463836s
Jan 26 11:24:18.715: INFO: Pod "pod-configmaps-5df73afd-402e-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.950671879s
STEP: Saw pod success
Jan 26 11:24:18.715: INFO: Pod "pod-configmaps-5df73afd-402e-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:24:18.720: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5df73afd-402e-11ea-b664-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 26 11:24:18.968: INFO: Waiting for pod pod-configmaps-5df73afd-402e-11ea-b664-0242ac110005 to disappear
Jan 26 11:24:18.981: INFO: Pod pod-configmaps-5df73afd-402e-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:24:18.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-s7vhp" for this suite.
Jan 26 11:24:25.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:24:25.168: INFO: namespace: e2e-tests-configmap-s7vhp, resource: bindings, ignored listing per whitelist
Jan 26 11:24:25.230: INFO: namespace e2e-tests-configmap-s7vhp deletion completed in 6.238821697s

• [SLOW TEST:17.773 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
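Note: the ConfigMap test maps a key to a nested path inside the volume and runs the pod as a non-root UID. A sketch of the key mapping plus security context, with illustrative names, key, and UID:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // illustrative non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map-example"},
						// map the key to a nested path instead of using the key itself as the filename
						Items: []corev1.KeyToPath{{Key: "data", Path: "path/to/data"}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}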
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:24:25.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 26 11:24:25.482: INFO: Waiting up to 5m0s for pod "downward-api-688684d0-402e-11ea-b664-0242ac110005" in namespace "e2e-tests-downward-api-n57rs" to be "success or failure"
Jan 26 11:24:25.513: INFO: Pod "downward-api-688684d0-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.476318ms
Jan 26 11:24:27.774: INFO: Pod "downward-api-688684d0-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292152494s
Jan 26 11:24:29.820: INFO: Pod "downward-api-688684d0-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337752502s
Jan 26 11:24:31.906: INFO: Pod "downward-api-688684d0-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424692063s
Jan 26 11:24:34.099: INFO: Pod "downward-api-688684d0-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.616728529s
Jan 26 11:24:36.200: INFO: Pod "downward-api-688684d0-402e-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.718103865s
STEP: Saw pod success
Jan 26 11:24:36.200: INFO: Pod "downward-api-688684d0-402e-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:24:36.477: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-688684d0-402e-11ea-b664-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 26 11:24:36.682: INFO: Waiting for pod downward-api-688684d0-402e-11ea-b664-0242ac110005 to disappear
Jan 26 11:24:36.753: INFO: Pod downward-api-688684d0-402e-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:24:36.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-n57rs" for this suite.
Jan 26 11:24:42.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:24:42.931: INFO: namespace: e2e-tests-downward-api-n57rs, resource: bindings, ignored listing per whitelist
Jan 26 11:24:42.934: INFO: namespace e2e-tests-downward-api-n57rs deletion completed in 6.166529575s

• [SLOW TEST:17.704 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
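Note: here the container declares no resource limits at all, so the downward API environment variables fall back to the node's allocatable CPU and memory. A sketch of the env var wiring (variable names are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// No resources set on the container, so these resolve to node allocatable values.
	env := []corev1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			},
		},
		{
			Name: "MEMORY_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
			},
		},
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}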
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:24:42.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 11:24:43.095: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.345309ms)
Jan 26 11:24:43.104: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.78044ms)
Jan 26 11:24:43.109: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.391444ms)
Jan 26 11:24:43.115: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.344516ms)
Jan 26 11:24:43.120: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.459433ms)
Jan 26 11:24:43.125: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.981889ms)
Jan 26 11:24:43.130: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.911468ms)
Jan 26 11:24:43.136: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.097874ms)
Jan 26 11:24:43.141: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.630824ms)
Jan 26 11:24:43.147: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.543072ms)
Jan 26 11:24:43.151: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.583605ms)
Jan 26 11:24:43.155: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.198815ms)
Jan 26 11:24:43.159: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.904823ms)
Jan 26 11:24:43.163: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.030981ms)
Jan 26 11:24:43.169: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.351528ms)
Jan 26 11:24:43.173: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.608081ms)
Jan 26 11:24:43.178: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.072576ms)
Jan 26 11:24:43.183: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.641147ms)
Jan 26 11:24:43.187: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.594291ms)
Jan 26 11:24:43.191: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.090796ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:24:43.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-f8vzn" for this suite.
Jan 26 11:24:49.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:24:49.387: INFO: namespace: e2e-tests-proxy-f8vzn, resource: bindings, ignored listing per whitelist
Jan 26 11:24:49.409: INFO: namespace e2e-tests-proxy-f8vzn deletion completed in 6.213940591s

• [SLOW TEST:6.475 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
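
The request path exercised above can be reproduced by hand through the apiserver's node proxy subresource; a sketch, with <node-name> as a placeholder and assuming the kubelet listens on its default port 10250:

kubectl get --raw "/api/v1/nodes/<node-name>:10250/proxy/logs/"
# returns the kubelet's /logs/ index (the node's /var/log directory listing),
# which is why entries such as alternatives.log show up in the responses above
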
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:24:49.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan 26 11:24:49.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 26 11:24:49.720: INFO: stderr: ""
Jan 26 11:24:49.720: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:24:49.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7jd6j" for this suite.
Jan 26 11:24:55.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:24:56.570: INFO: namespace: e2e-tests-kubectl-7jd6j, resource: bindings, ignored listing per whitelist
Jan 26 11:24:56.591: INFO: namespace e2e-tests-kubectl-7jd6j deletion completed in 6.860109295s

• [SLOW TEST:7.182 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
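
A quick manual equivalent of the check above; note that newer releases print "Kubernetes control plane" where this version prints "Kubernetes master":

kubectl cluster-info
kubectl cluster-info | grep -q "Kubernetes master" && echo "master service advertised"
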
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:24:56.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-7b248cae-402e-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 11:24:56.827: INFO: Waiting up to 5m0s for pod "pod-configmaps-7b2569ec-402e-11ea-b664-0242ac110005" in namespace "e2e-tests-configmap-h5tnh" to be "success or failure"
Jan 26 11:24:56.839: INFO: Pod "pod-configmaps-7b2569ec-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.586726ms
Jan 26 11:24:58.868: INFO: Pod "pod-configmaps-7b2569ec-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041656119s
Jan 26 11:25:00.913: INFO: Pod "pod-configmaps-7b2569ec-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085970849s
Jan 26 11:25:03.130: INFO: Pod "pod-configmaps-7b2569ec-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.303166586s
Jan 26 11:25:05.144: INFO: Pod "pod-configmaps-7b2569ec-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.317438469s
Jan 26 11:25:07.198: INFO: Pod "pod-configmaps-7b2569ec-402e-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.370792161s
STEP: Saw pod success
Jan 26 11:25:07.198: INFO: Pod "pod-configmaps-7b2569ec-402e-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:25:07.211: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7b2569ec-402e-11ea-b664-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 26 11:25:07.419: INFO: Waiting for pod pod-configmaps-7b2569ec-402e-11ea-b664-0242ac110005 to disappear
Jan 26 11:25:07.426: INFO: Pod pod-configmaps-7b2569ec-402e-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:25:07.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h5tnh" for this suite.
Jan 26 11:25:13.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:25:13.658: INFO: namespace: e2e-tests-configmap-h5tnh, resource: bindings, ignored listing per whitelist
Jan 26 11:25:13.725: INFO: namespace e2e-tests-configmap-h5tnh deletion completed in 6.291589965s

• [SLOW TEST:17.134 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
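
A minimal sketch of a ConfigMap consumed through a volume with a key-to-path mapping, the mechanism this test exercises; resource names, the image, and the mapped path are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-mappings-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-mappings-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-mappings-demo
      items:
      - key: data-1
        path: path/to/data-1     # key is remapped to this relative path
EOF
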
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:25:13.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan 26 11:25:14.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wnwhn'
Jan 26 11:25:14.649: INFO: stderr: ""
Jan 26 11:25:14.649: INFO: stdout: "pod/pause created\n"
Jan 26 11:25:14.649: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 26 11:25:14.649: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-wnwhn" to be "running and ready"
Jan 26 11:25:14.661: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.190534ms
Jan 26 11:25:16.986: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.336397532s
Jan 26 11:25:19.007: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.35808806s
Jan 26 11:25:21.171: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.521443234s
Jan 26 11:25:23.190: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541034136s
Jan 26 11:25:25.205: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.555404082s
Jan 26 11:25:25.205: INFO: Pod "pause" satisfied condition "running and ready"
Jan 26 11:25:25.205: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 26 11:25:25.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-wnwhn'
Jan 26 11:25:25.416: INFO: stderr: ""
Jan 26 11:25:25.416: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 26 11:25:25.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-wnwhn'
Jan 26 11:25:25.537: INFO: stderr: ""
Jan 26 11:25:25.538: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 26 11:25:25.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-wnwhn'
Jan 26 11:25:25.671: INFO: stderr: ""
Jan 26 11:25:25.671: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 26 11:25:25.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-wnwhn'
Jan 26 11:25:25.831: INFO: stderr: ""
Jan 26 11:25:25.831: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan 26 11:25:25.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wnwhn'
Jan 26 11:25:25.999: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 11:25:25.999: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 26 11:25:25.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-wnwhn'
Jan 26 11:25:26.241: INFO: stderr: "No resources found.\n"
Jan 26 11:25:26.241: INFO: stdout: ""
Jan 26 11:25:26.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-wnwhn -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 26 11:25:26.358: INFO: stderr: ""
Jan 26 11:25:26.358: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:25:26.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wnwhn" for this suite.
Jan 26 11:25:34.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:25:34.484: INFO: namespace: e2e-tests-kubectl-wnwhn, resource: bindings, ignored listing per whitelist
Jan 26 11:25:34.746: INFO: namespace e2e-tests-kubectl-wnwhn deletion completed in 8.37558385s

• [SLOW TEST:21.021 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:25:34.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:25:41.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-v2r7x" for this suite.
Jan 26 11:25:47.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:25:47.639: INFO: namespace: e2e-tests-namespaces-v2r7x, resource: bindings, ignored listing per whitelist
Jan 26 11:25:47.676: INFO: namespace e2e-tests-namespaces-v2r7x deletion completed in 6.182149517s
STEP: Destroying namespace "e2e-tests-nsdeletetest-npdhc" for this suite.
Jan 26 11:25:47.680: INFO: Namespace e2e-tests-nsdeletetest-npdhc was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-27znr" for this suite.
Jan 26 11:25:53.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:25:53.856: INFO: namespace: e2e-tests-nsdeletetest-27znr, resource: bindings, ignored listing per whitelist
Jan 26 11:25:53.927: INFO: namespace e2e-tests-nsdeletetest-27znr deletion completed in 6.246406617s

• [SLOW TEST:19.181 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
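
The same behaviour can be reproduced by hand; names are illustrative, and the point is that a recreated namespace starts empty rather than inheriting the deleted namespace's services:

kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo create service clusterip test-service --tcp=80:80
kubectl delete namespace nsdelete-demo     # waits for the namespace to be finalized
kubectl create namespace nsdelete-demo     # recreate it
kubectl -n nsdelete-demo get services      # no services carried over
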
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:25:53.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 26 11:25:54.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-xkj2x'
Jan 26 11:25:54.415: INFO: stderr: ""
Jan 26 11:25:54.415: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan 26 11:25:54.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-xkj2x'
Jan 26 11:26:00.689: INFO: stderr: ""
Jan 26 11:26:00.689: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:26:00.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xkj2x" for this suite.
Jan 26 11:26:06.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:26:06.898: INFO: namespace: e2e-tests-kubectl-xkj2x, resource: bindings, ignored listing per whitelist
Jan 26 11:26:06.898: INFO: namespace e2e-tests-kubectl-xkj2x deletion completed in 6.199412983s

• [SLOW TEST:12.970 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
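
The invocation logged above relies on the --generator flag, which later kubectl releases deprecate and eventually drop; on current clients the equivalent (a bare pod, no controller) is simply:

kubectl run e2e-test-nginx-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl get pod e2e-test-nginx-pod
kubectl delete pod e2e-test-nginx-pod
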
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:26:06.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 11:26:07.098: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a51a2615-402e-11ea-b664-0242ac110005" in namespace "e2e-tests-downward-api-llnst" to be "success or failure"
Jan 26 11:26:07.116: INFO: Pod "downwardapi-volume-a51a2615-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.798454ms
Jan 26 11:26:09.128: INFO: Pod "downwardapi-volume-a51a2615-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029359941s
Jan 26 11:26:11.148: INFO: Pod "downwardapi-volume-a51a2615-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049705817s
Jan 26 11:26:13.168: INFO: Pod "downwardapi-volume-a51a2615-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069794116s
Jan 26 11:26:15.187: INFO: Pod "downwardapi-volume-a51a2615-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088359532s
Jan 26 11:26:17.201: INFO: Pod "downwardapi-volume-a51a2615-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.102499164s
Jan 26 11:26:19.215: INFO: Pod "downwardapi-volume-a51a2615-402e-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.117129408s
STEP: Saw pod success
Jan 26 11:26:19.215: INFO: Pod "downwardapi-volume-a51a2615-402e-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:26:19.222: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a51a2615-402e-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 11:26:19.880: INFO: Waiting for pod downwardapi-volume-a51a2615-402e-11ea-b664-0242ac110005 to disappear
Jan 26 11:26:19.899: INFO: Pod downwardapi-volume-a51a2615-402e-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:26:19.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-llnst" for this suite.
Jan 26 11:26:25.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:26:26.013: INFO: namespace: e2e-tests-downward-api-llnst, resource: bindings, ignored listing per whitelist
Jan 26 11:26:26.213: INFO: namespace e2e-tests-downward-api-llnst deletion completed in 6.308796161s

• [SLOW TEST:19.315 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
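
A sketch of the defaultMode mechanism under test: the mode applies to every file projected into the downward API volume unless an item overrides it. Pod name, image, and the 0400 value are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400          # applied to the projected files below
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
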
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:26:26.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 11:26:26.376: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b097e897-402e-11ea-b664-0242ac110005" in namespace "e2e-tests-downward-api-bv5xf" to be "success or failure"
Jan 26 11:26:26.390: INFO: Pod "downwardapi-volume-b097e897-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.324792ms
Jan 26 11:26:28.473: INFO: Pod "downwardapi-volume-b097e897-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096259634s
Jan 26 11:26:30.494: INFO: Pod "downwardapi-volume-b097e897-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118110877s
Jan 26 11:26:32.543: INFO: Pod "downwardapi-volume-b097e897-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166397811s
Jan 26 11:26:34.590: INFO: Pod "downwardapi-volume-b097e897-402e-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.213472071s
STEP: Saw pod success
Jan 26 11:26:34.590: INFO: Pod "downwardapi-volume-b097e897-402e-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:26:34.673: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b097e897-402e-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 11:26:34.831: INFO: Waiting for pod downwardapi-volume-b097e897-402e-11ea-b664-0242ac110005 to disappear
Jan 26 11:26:34.837: INFO: Pod downwardapi-volume-b097e897-402e-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:26:34.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bv5xf" for this suite.
Jan 26 11:26:40.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:26:41.122: INFO: namespace: e2e-tests-downward-api-bv5xf, resource: bindings, ignored listing per whitelist
Jan 26 11:26:41.130: INFO: namespace e2e-tests-downward-api-bv5xf deletion completed in 6.286320352s

• [SLOW TEST:14.916 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
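
A sketch of the resourceFieldRef projection this test relies on; the request value and names are illustrative. The projected file holds the request in bytes (33554432 for 32Mi) unless a divisor is set:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container   # required for volume items
          resource: requests.memory
EOF
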
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:26:41.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-9pmx5/configmap-test-b992c0b6-402e-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 11:26:41.459: INFO: Waiting up to 5m0s for pod "pod-configmaps-b99468f2-402e-11ea-b664-0242ac110005" in namespace "e2e-tests-configmap-9pmx5" to be "success or failure"
Jan 26 11:26:41.615: INFO: Pod "pod-configmaps-b99468f2-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 155.948945ms
Jan 26 11:26:43.629: INFO: Pod "pod-configmaps-b99468f2-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170381818s
Jan 26 11:26:45.651: INFO: Pod "pod-configmaps-b99468f2-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192188249s
Jan 26 11:26:47.664: INFO: Pod "pod-configmaps-b99468f2-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205252841s
Jan 26 11:26:49.677: INFO: Pod "pod-configmaps-b99468f2-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217609035s
Jan 26 11:26:51.686: INFO: Pod "pod-configmaps-b99468f2-402e-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.22720344s
STEP: Saw pod success
Jan 26 11:26:51.686: INFO: Pod "pod-configmaps-b99468f2-402e-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:26:51.691: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b99468f2-402e-11ea-b664-0242ac110005 container env-test: 
STEP: delete the pod
Jan 26 11:26:52.412: INFO: Waiting for pod pod-configmaps-b99468f2-402e-11ea-b664-0242ac110005 to disappear
Jan 26 11:26:52.612: INFO: Pod pod-configmaps-b99468f2-402e-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:26:52.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9pmx5" for this suite.
Jan 26 11:26:58.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:26:58.955: INFO: namespace: e2e-tests-configmap-9pmx5, resource: bindings, ignored listing per whitelist
Jan 26 11:26:58.979: INFO: namespace e2e-tests-configmap-9pmx5 deletion completed in 6.331173079s

• [SLOW TEST:17.849 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
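
A sketch of consuming a ConfigMap key as an environment variable, the path this test takes; the ConfigMap name, key, and image are illustrative:

kubectl create configmap env-demo-config --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-demo-config
          key: data-1
EOF
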
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:26:58.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 11:26:59.347: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c434c472-402e-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-clnsk" to be "success or failure"
Jan 26 11:26:59.385: INFO: Pod "downwardapi-volume-c434c472-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 37.980242ms
Jan 26 11:27:01.397: INFO: Pod "downwardapi-volume-c434c472-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049170216s
Jan 26 11:27:03.417: INFO: Pod "downwardapi-volume-c434c472-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069708643s
Jan 26 11:27:05.440: INFO: Pod "downwardapi-volume-c434c472-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092642474s
Jan 26 11:27:07.816: INFO: Pod "downwardapi-volume-c434c472-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.468532628s
Jan 26 11:27:09.836: INFO: Pod "downwardapi-volume-c434c472-402e-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.488611534s
STEP: Saw pod success
Jan 26 11:27:09.836: INFO: Pod "downwardapi-volume-c434c472-402e-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:27:09.842: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c434c472-402e-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 11:27:10.576: INFO: Waiting for pod downwardapi-volume-c434c472-402e-11ea-b664-0242ac110005 to disappear
Jan 26 11:27:11.027: INFO: Pod downwardapi-volume-c434c472-402e-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:27:11.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-clnsk" for this suite.
Jan 26 11:27:17.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:27:17.397: INFO: namespace: e2e-tests-projected-clnsk, resource: bindings, ignored listing per whitelist
Jan 26 11:27:17.437: INFO: namespace e2e-tests-projected-clnsk deletion completed in 6.385995286s

• [SLOW TEST:18.454 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
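
A sketch of a projected volume with a single downwardAPI source exposing only the pod name, which is what this test reads back; names and image are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
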
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:27:17.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 26 11:27:17.703: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:27:40.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-pdvj7" for this suite.
Jan 26 11:28:04.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:28:04.551: INFO: namespace: e2e-tests-init-container-pdvj7, resource: bindings, ignored listing per whitelist
Jan 26 11:28:04.569: INFO: namespace e2e-tests-init-container-pdvj7 deletion completed in 24.356155235s

• [SLOW TEST:47.132 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
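
A sketch of the pattern under test: on a RestartPolicy=Always pod, each init container must run to completion, in order, before the main container starts. Names, images, and commands are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-containers-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo first init step"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second init step"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
kubectl get pod init-containers-demo -o jsonpath='{.status.initContainerStatuses[*].state}'
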
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:28:04.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-eb470130-402e-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 11:28:04.940: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eb492a5c-402e-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-z772g" to be "success or failure"
Jan 26 11:28:04.954: INFO: Pod "pod-projected-secrets-eb492a5c-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.158543ms
Jan 26 11:28:06.970: INFO: Pod "pod-projected-secrets-eb492a5c-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029203489s
Jan 26 11:28:08.986: INFO: Pod "pod-projected-secrets-eb492a5c-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045915938s
Jan 26 11:28:10.999: INFO: Pod "pod-projected-secrets-eb492a5c-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058332212s
Jan 26 11:28:13.005: INFO: Pod "pod-projected-secrets-eb492a5c-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064067444s
Jan 26 11:28:15.020: INFO: Pod "pod-projected-secrets-eb492a5c-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07959626s
Jan 26 11:28:17.028: INFO: Pod "pod-projected-secrets-eb492a5c-402e-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.087271696s
STEP: Saw pod success
Jan 26 11:28:17.028: INFO: Pod "pod-projected-secrets-eb492a5c-402e-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:28:17.031: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-eb492a5c-402e-11ea-b664-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 26 11:28:17.558: INFO: Waiting for pod pod-projected-secrets-eb492a5c-402e-11ea-b664-0242ac110005 to disappear
Jan 26 11:28:17.797: INFO: Pod pod-projected-secrets-eb492a5c-402e-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:28:17.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z772g" for this suite.
Jan 26 11:28:23.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:28:23.998: INFO: namespace: e2e-tests-projected-z772g, resource: bindings, ignored listing per whitelist
Jan 26 11:28:24.150: INFO: namespace e2e-tests-projected-z772g deletion completed in 6.32468741s

• [SLOW TEST:19.581 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
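
A sketch of a projected volume with a secret source and a key-to-path mapping, mirroring what this test consumes; resource names, key, path, and image are illustrative:

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mappings-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret/new-path-data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1
EOF
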
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:28:24.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 26 11:28:24.332: INFO: Waiting up to 5m0s for pod "pod-f6dd48f7-402e-11ea-b664-0242ac110005" in namespace "e2e-tests-emptydir-4n4l4" to be "success or failure"
Jan 26 11:28:24.345: INFO: Pod "pod-f6dd48f7-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.61804ms
Jan 26 11:28:26.356: INFO: Pod "pod-f6dd48f7-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023798377s
Jan 26 11:28:28.370: INFO: Pod "pod-f6dd48f7-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038161919s
Jan 26 11:28:30.783: INFO: Pod "pod-f6dd48f7-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.451261951s
Jan 26 11:28:32.811: INFO: Pod "pod-f6dd48f7-402e-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.479066279s
Jan 26 11:28:35.290: INFO: Pod "pod-f6dd48f7-402e-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.958122116s
STEP: Saw pod success
Jan 26 11:28:35.290: INFO: Pod "pod-f6dd48f7-402e-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:28:35.313: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f6dd48f7-402e-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 11:28:35.981: INFO: Waiting for pod pod-f6dd48f7-402e-11ea-b664-0242ac110005 to disappear
Jan 26 11:28:35.990: INFO: Pod pod-f6dd48f7-402e-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:28:35.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4n4l4" for this suite.
Jan 26 11:28:42.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:28:42.318: INFO: namespace: e2e-tests-emptydir-4n4l4, resource: bindings, ignored listing per whitelist
Jan 26 11:28:42.360: INFO: namespace e2e-tests-emptydir-4n4l4 deletion completed in 6.362938792s

• [SLOW TEST:18.210 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
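
A sketch of the minimal emptyDir case this test covers: leaving medium unset backs the volume with node storage, and the test asserts the mount point's directory mode. Names and image are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-medium-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium: node disk
EOF
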
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:28:42.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan 26 11:28:42.633: INFO: Waiting up to 5m0s for pod "client-containers-01cf8b3a-402f-11ea-b664-0242ac110005" in namespace "e2e-tests-containers-55nql" to be "success or failure"
Jan 26 11:28:42.709: INFO: Pod "client-containers-01cf8b3a-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 75.77311ms
Jan 26 11:28:44.735: INFO: Pod "client-containers-01cf8b3a-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101781516s
Jan 26 11:28:46.752: INFO: Pod "client-containers-01cf8b3a-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118812065s
Jan 26 11:28:48.779: INFO: Pod "client-containers-01cf8b3a-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145461364s
Jan 26 11:28:50.799: INFO: Pod "client-containers-01cf8b3a-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1653965s
Jan 26 11:28:52.850: INFO: Pod "client-containers-01cf8b3a-402f-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.216474478s
STEP: Saw pod success
Jan 26 11:28:52.850: INFO: Pod "client-containers-01cf8b3a-402f-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:28:52.875: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-01cf8b3a-402f-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 11:28:52.966: INFO: Waiting for pod client-containers-01cf8b3a-402f-11ea-b664-0242ac110005 to disappear
Jan 26 11:28:53.001: INFO: Pod client-containers-01cf8b3a-402f-11ea-b664-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:28:53.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-55nql" for this suite.
Jan 26 11:29:01.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:29:01.203: INFO: namespace: e2e-tests-containers-55nql, resource: bindings, ignored listing per whitelist
Jan 26 11:29:01.346: INFO: namespace e2e-tests-containers-55nql deletion completed in 8.332512049s

• [SLOW TEST:18.986 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
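
A sketch of the override mechanism under test: setting args replaces the image's CMD while leaving its ENTRYPOINT alone (command would replace the ENTRYPOINT too). Pod name, image, and arguments are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-args-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    args: ["echo", "overridden arguments"]   # replaces the image's default CMD
EOF
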
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:29:01.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0126 11:29:11.834466       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 11:29:11.834: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:29:11.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-59qmc" for this suite.
Jan 26 11:29:17.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:29:17.945: INFO: namespace: e2e-tests-gc-59qmc, resource: bindings, ignored listing per whitelist
Jan 26 11:29:18.036: INFO: namespace e2e-tests-gc-59qmc deletion completed in 6.196354209s

• [SLOW TEST:16.690 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
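
A sketch of the cascade being verified: deleting a ReplicationController without orphaning lets the garbage collector remove its pods as well. Names and image are illustrative; kubectl's default delete is the non-orphaning (cascading) case:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
kubectl delete rc gc-demo-rc
kubectl get pods -l app=gc-demo    # drains to empty once the garbage collector catches up
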
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:29:18.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-17092968-402f-11ea-b664-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-170929dc-402f-11ea-b664-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-17092968-402f-11ea-b664-0242ac110005
STEP: Updating configmap cm-test-opt-upd-170929dc-402f-11ea-b664-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-17092a0a-402f-11ea-b664-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:29:36.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-n5x2b" for this suite.
Jan 26 11:30:00.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:30:01.046: INFO: namespace: e2e-tests-configmap-n5x2b, resource: bindings, ignored listing per whitelist
Jan 26 11:30:01.199: INFO: namespace e2e-tests-configmap-n5x2b deletion completed in 24.243693872s

• [SLOW TEST:43.162 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
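
A sketch of the optional-ConfigMap behaviour this test watches: the pod starts even if the referenced ConfigMap is absent, and keys appear in (or vanish from) the volume once the kubelet resyncs. Names and image are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-optional-demo
spec:
  containers:
  - name: volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: optional-config
      mountPath: /etc/optional-config
  volumes:
  - name: optional-config
    configMap:
      name: cm-that-may-not-exist
      optional: true             # missing ConfigMap is tolerated
EOF
kubectl create configmap cm-that-may-not-exist --from-literal=data-1=value-1
# after the kubelet's next sync, /etc/optional-config/data-1 appears inside the pod
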
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:30:01.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 26 11:30:01.642: INFO: Waiting up to 5m0s for pod "pod-30e600ce-402f-11ea-b664-0242ac110005" in namespace "e2e-tests-emptydir-6rwkv" to be "success or failure"
Jan 26 11:30:01.745: INFO: Pod "pod-30e600ce-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 102.884116ms
Jan 26 11:30:03.987: INFO: Pod "pod-30e600ce-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344690652s
Jan 26 11:30:06.001: INFO: Pod "pod-30e600ce-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.358941041s
Jan 26 11:30:08.018: INFO: Pod "pod-30e600ce-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.375894136s
Jan 26 11:30:10.026: INFO: Pod "pod-30e600ce-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.383974562s
Jan 26 11:30:12.102: INFO: Pod "pod-30e600ce-402f-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.45941434s
STEP: Saw pod success
Jan 26 11:30:12.102: INFO: Pod "pod-30e600ce-402f-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:30:12.107: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-30e600ce-402f-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 11:30:12.371: INFO: Waiting for pod pod-30e600ce-402f-11ea-b664-0242ac110005 to disappear
Jan 26 11:30:12.381: INFO: Pod pod-30e600ce-402f-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:30:12.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6rwkv" for this suite.
Jan 26 11:30:18.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:30:18.679: INFO: namespace: e2e-tests-emptydir-6rwkv, resource: bindings, ignored listing per whitelist
Jan 26 11:30:18.765: INFO: namespace e2e-tests-emptydir-6rwkv deletion completed in 6.378765483s

• [SLOW TEST:17.564 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
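The (root,0644,tmpfs) case boils down to an emptyDir volume backed by memory (tmpfs) plus a file written with mode 0644, which the test container then verifies. A minimal sketch under the same assumptions as before (busybox image and shell command are illustrative, not the suite's own test image):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644-tmpfs"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"/bin/sh", "-c",
					"echo hello > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" is what turns the emptyDir into a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}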
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:30:18.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 11:30:19.024: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 26 11:30:24.389: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 26 11:30:28.415: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 26 11:30:28.557: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-9vxg6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9vxg6/deployments/test-cleanup-deployment,UID:40e0e0bd-402f-11ea-a994-fa163e34d433,ResourceVersion:19515527,Generation:1,CreationTimestamp:2020-01-26 11:30:28 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 26 11:30:28.644: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:30:28.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-9vxg6" for this suite.
Jan 26 11:30:36.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:30:37.021: INFO: namespace: e2e-tests-deployment-9vxg6, resource: bindings, ignored listing per whitelist
Jan 26 11:30:37.160: INFO: namespace e2e-tests-deployment-9vxg6 deletion completed in 8.438697441s

• [SLOW TEST:18.394 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
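The deployment dumped above carries RevisionHistoryLimit:*0, which is what makes the old ReplicaSet eligible for immediate garbage collection after the rollout, i.e. the behavior this test asserts. A Go sketch of an equivalent object; only the fields visible in the dump are reproduced, everything else is left to defaults:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	historyLimit := int32(0) // keep no old ReplicaSets around
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "test-cleanup-deployment",
			Labels: map[string]string{"name": "cleanup-pod"},
		},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector:             &metav1.LabelSelector{MatchLabels: map[string]string{"name": "cleanup-pod"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "cleanup-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}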
SSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:30:37.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-dg7xz
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-dg7xz
STEP: Deleting pre-stop pod
Jan 26 11:31:02.606: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:31:02.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-dg7xz" for this suite.
Jan 26 11:31:42.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:31:42.776: INFO: namespace: e2e-tests-prestop-dg7xz, resource: bindings, ignored listing per whitelist
Jan 26 11:31:42.831: INFO: namespace e2e-tests-prestop-dg7xz deletion completed in 40.119014851s

• [SLOW TEST:65.670 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
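The "Received": {"prestop": 1} payload above is the server pod counting one hit made by the tester pod's preStop hook while the tester was being deleted. A sketch of a pod carrying such a hook; the target host, port, and path are illustrative assumptions rather than the suite's actual wiring, and corev1.Handler is the v1.13-era type name (renamed LifecycleHandler in later releases):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	grace := int64(30)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace, // the hook must finish within the grace period
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: "10.32.0.4", // illustrative server pod IP
							Port: intstr.FromInt(8080),
							Path: "/prestop",
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}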
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:31:42.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 26 11:31:43.065: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 26 11:31:43.151: INFO: Waiting for terminating namespaces to be deleted...
Jan 26 11:31:43.156: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 26 11:31:43.171: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 26 11:31:43.171: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 26 11:31:43.171: INFO: 	Container coredns ready: true, restart count 0
Jan 26 11:31:43.171: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 26 11:31:43.171: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 26 11:31:43.171: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 26 11:31:43.171: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 26 11:31:43.171: INFO: 	Container weave ready: true, restart count 0
Jan 26 11:31:43.171: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 11:31:43.171: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 26 11:31:43.171: INFO: 	Container coredns ready: true, restart count 0
Jan 26 11:31:43.171: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 26 11:31:43.171: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan 26 11:31:43.365: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 26 11:31:43.365: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 26 11:31:43.365: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 26 11:31:43.365: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan 26 11:31:43.365: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan 26 11:31:43.365: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 26 11:31:43.365: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 26 11:31:43.365: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6d8a839a-402f-11ea-b664-0242ac110005.15ed6c7735daaded], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-z8vt6/filler-pod-6d8a839a-402f-11ea-b664-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6d8a839a-402f-11ea-b664-0242ac110005.15ed6c7836900c87], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6d8a839a-402f-11ea-b664-0242ac110005.15ed6c78b4826b2e], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6d8a839a-402f-11ea-b664-0242ac110005.15ed6c78e3fb90dc], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ed6c79150e83fa], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:31:52.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-z8vt6" for this suite.
Jan 26 11:32:00.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:32:01.032: INFO: namespace: e2e-tests-sched-pred-z8vt6, resource: bindings, ignored listing per whitelist
Jan 26 11:32:01.101: INFO: namespace e2e-tests-sched-pred-z8vt6 deletion completed in 8.419760622s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:18.270 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
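The test above sums the CPU requests already on the node (roughly 770m across the system pods listed), fills most of the remaining allocatable CPU with pause-image filler pods, and then shows that one more pod with a large request stays Pending with "Insufficient cpu". A sketch of the kind of request-only pod involved; the quantity is an illustrative assumption, since the real test computes it from node allocatable:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					// The scheduler checks requests (not usage) against node capacity;
					// limits are set equal here only to keep the pod in the Guaranteed class.
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}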
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:32:01.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-784813f5-402f-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 11:32:01.419: INFO: Waiting up to 5m0s for pod "pod-secrets-784ad1c1-402f-11ea-b664-0242ac110005" in namespace "e2e-tests-secrets-46ttr" to be "success or failure"
Jan 26 11:32:01.557: INFO: Pod "pod-secrets-784ad1c1-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 137.967228ms
Jan 26 11:32:03.883: INFO: Pod "pod-secrets-784ad1c1-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.464080862s
Jan 26 11:32:05.895: INFO: Pod "pod-secrets-784ad1c1-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47603113s
Jan 26 11:32:08.396: INFO: Pod "pod-secrets-784ad1c1-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.976988269s
Jan 26 11:32:10.456: INFO: Pod "pod-secrets-784ad1c1-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.036775681s
Jan 26 11:32:12.492: INFO: Pod "pod-secrets-784ad1c1-402f-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.072814588s
STEP: Saw pod success
Jan 26 11:32:12.492: INFO: Pod "pod-secrets-784ad1c1-402f-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:32:12.524: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-784ad1c1-402f-11ea-b664-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 26 11:32:13.247: INFO: Waiting for pod pod-secrets-784ad1c1-402f-11ea-b664-0242ac110005 to disappear
Jan 26 11:32:13.255: INFO: Pod pod-secrets-784ad1c1-402f-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:32:13.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-46ttr" for this suite.
Jan 26 11:32:19.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:32:19.462: INFO: namespace: e2e-tests-secrets-46ttr, resource: bindings, ignored listing per whitelist
Jan 26 11:32:19.559: INFO: namespace e2e-tests-secrets-46ttr deletion completed in 6.295524645s

• [SLOW TEST:18.458 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
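Here the secret volume is mounted with a non-default file mode while the pod runs as a non-root UID with an fsGroup set, so the projected files end up readable by that group. A sketch of such a spec; the UID, GID, mode, image, and the shortened secret name are illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)
	fsGroup := int64(1001)
	mode := int32(0440) // defaultMode applied to every key projected from the secret
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-nonroot"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup,
			},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"/bin/sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test", // shorthand for the generated name above
						DefaultMode: &mode,
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}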
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:32:19.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 11:32:19.764: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8338f8e9-402f-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-pqfqv" to be "success or failure"
Jan 26 11:32:19.778: INFO: Pod "downwardapi-volume-8338f8e9-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.568871ms
Jan 26 11:32:21.809: INFO: Pod "downwardapi-volume-8338f8e9-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045260222s
Jan 26 11:32:23.842: INFO: Pod "downwardapi-volume-8338f8e9-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078020195s
Jan 26 11:32:25.871: INFO: Pod "downwardapi-volume-8338f8e9-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107039198s
Jan 26 11:32:27.882: INFO: Pod "downwardapi-volume-8338f8e9-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117434824s
Jan 26 11:32:30.169: INFO: Pod "downwardapi-volume-8338f8e9-402f-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.405164391s
STEP: Saw pod success
Jan 26 11:32:30.169: INFO: Pod "downwardapi-volume-8338f8e9-402f-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:32:30.177: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8338f8e9-402f-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 11:32:30.536: INFO: Waiting for pod downwardapi-volume-8338f8e9-402f-11ea-b664-0242ac110005 to disappear
Jan 26 11:32:30.727: INFO: Pod downwardapi-volume-8338f8e9-402f-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:32:30.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pqfqv" for this suite.
Jan 26 11:32:36.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:32:36.886: INFO: namespace: e2e-tests-projected-pqfqv, resource: bindings, ignored listing per whitelist
Jan 26 11:32:36.946: INFO: namespace e2e-tests-projected-pqfqv deletion completed in 6.208134718s

• [SLOW TEST:17.387 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
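This check relies on the downward API's resourceFieldRef: when the container declares no memory limit, the projected limits.memory file falls back to the node's allocatable memory, which is what the test verifies. A sketch of such a projected volume; the file path, pod name, and image are illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-memlimit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// No resources.limits.memory on purpose: the projected value
				// should then report the node's allocatable memory instead.
				Command:      []string{"/bin/sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}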
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:32:36.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 26 11:32:47.739: INFO: Successfully updated pod "annotationupdate8d957ba9-402f-11ea-b664-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:32:49.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p8j28" for this suite.
Jan 26 11:33:14.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:33:14.373: INFO: namespace: e2e-tests-projected-p8j28, resource: bindings, ignored listing per whitelist
Jan 26 11:33:14.373: INFO: namespace e2e-tests-projected-p8j28 deletion completed in 24.43413439s

• [SLOW TEST:37.427 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:33:14.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan 26 11:33:14.676: INFO: Waiting up to 5m0s for pod "var-expansion-a3f291b7-402f-11ea-b664-0242ac110005" in namespace "e2e-tests-var-expansion-7gzgt" to be "success or failure"
Jan 26 11:33:14.687: INFO: Pod "var-expansion-a3f291b7-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.593353ms
Jan 26 11:33:16.721: INFO: Pod "var-expansion-a3f291b7-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044769415s
Jan 26 11:33:18.729: INFO: Pod "var-expansion-a3f291b7-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052859166s
Jan 26 11:33:20.740: INFO: Pod "var-expansion-a3f291b7-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063545618s
Jan 26 11:33:22.764: INFO: Pod "var-expansion-a3f291b7-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087674249s
Jan 26 11:33:25.047: INFO: Pod "var-expansion-a3f291b7-402f-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.370674566s
STEP: Saw pod success
Jan 26 11:33:25.047: INFO: Pod "var-expansion-a3f291b7-402f-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:33:25.056: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-a3f291b7-402f-11ea-b664-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 26 11:33:25.269: INFO: Waiting for pod var-expansion-a3f291b7-402f-11ea-b664-0242ac110005 to disappear
Jan 26 11:33:25.285: INFO: Pod var-expansion-a3f291b7-402f-11ea-b664-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:33:25.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-7gzgt" for this suite.
Jan 26 11:33:31.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:33:31.523: INFO: namespace: e2e-tests-var-expansion-7gzgt, resource: bindings, ignored listing per whitelist
Jan 26 11:33:31.556: INFO: namespace e2e-tests-var-expansion-7gzgt deletion completed in 6.260925324s

• [SLOW TEST:17.183 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
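Env composition here means a later env var referencing earlier ones with the $(VAR) syntax, which the kubelet expands before the container starts. A sketch with illustrative variable names and values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "env | grep FOOBAR"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// $(FOO) and $(BAR) are expanded by the kubelet, not by the shell.
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}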
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:33:31.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-ae32ec7b-402f-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 11:33:31.863: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ae33b905-402f-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-xgb9l" to be "success or failure"
Jan 26 11:33:31.885: INFO: Pod "pod-projected-configmaps-ae33b905-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.976505ms
Jan 26 11:33:33.898: INFO: Pod "pod-projected-configmaps-ae33b905-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035367611s
Jan 26 11:33:35.930: INFO: Pod "pod-projected-configmaps-ae33b905-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067566707s
Jan 26 11:33:37.949: INFO: Pod "pod-projected-configmaps-ae33b905-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08630319s
Jan 26 11:33:39.968: INFO: Pod "pod-projected-configmaps-ae33b905-402f-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105501038s
Jan 26 11:33:42.344: INFO: Pod "pod-projected-configmaps-ae33b905-402f-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.480656736s
STEP: Saw pod success
Jan 26 11:33:42.344: INFO: Pod "pod-projected-configmaps-ae33b905-402f-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:33:42.350: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ae33b905-402f-11ea-b664-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 26 11:33:42.778: INFO: Waiting for pod pod-projected-configmaps-ae33b905-402f-11ea-b664-0242ac110005 to disappear
Jan 26 11:33:42.799: INFO: Pod pod-projected-configmaps-ae33b905-402f-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:33:42.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xgb9l" for this suite.
Jan 26 11:33:48.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:33:48.894: INFO: namespace: e2e-tests-projected-xgb9l, resource: bindings, ignored listing per whitelist
Jan 26 11:33:48.979: INFO: namespace e2e-tests-projected-xgb9l deletion completed in 6.173847383s

• [SLOW TEST:17.423 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:33:48.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-n6wvw
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 26 11:33:49.228: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 26 11:34:21.581: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-n6wvw PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 11:34:21.581: INFO: >>> kubeConfig: /root/.kube/config
I0126 11:34:21.672758       8 log.go:172] (0xc00070bd90) (0xc000a221e0) Create stream
I0126 11:34:21.672826       8 log.go:172] (0xc00070bd90) (0xc000a221e0) Stream added, broadcasting: 1
I0126 11:34:21.679257       8 log.go:172] (0xc00070bd90) Reply frame received for 1
I0126 11:34:21.679303       8 log.go:172] (0xc00070bd90) (0xc001179900) Create stream
I0126 11:34:21.679313       8 log.go:172] (0xc00070bd90) (0xc001179900) Stream added, broadcasting: 3
I0126 11:34:21.681144       8 log.go:172] (0xc00070bd90) Reply frame received for 3
I0126 11:34:21.681178       8 log.go:172] (0xc00070bd90) (0xc001fa86e0) Create stream
I0126 11:34:21.681185       8 log.go:172] (0xc00070bd90) (0xc001fa86e0) Stream added, broadcasting: 5
I0126 11:34:21.682989       8 log.go:172] (0xc00070bd90) Reply frame received for 5
I0126 11:34:21.975750       8 log.go:172] (0xc00070bd90) Data frame received for 3
I0126 11:34:21.975876       8 log.go:172] (0xc001179900) (3) Data frame handling
I0126 11:34:21.975912       8 log.go:172] (0xc001179900) (3) Data frame sent
I0126 11:34:22.119268       8 log.go:172] (0xc00070bd90) Data frame received for 1
I0126 11:34:22.119421       8 log.go:172] (0xc000a221e0) (1) Data frame handling
I0126 11:34:22.119459       8 log.go:172] (0xc000a221e0) (1) Data frame sent
I0126 11:34:22.119598       8 log.go:172] (0xc00070bd90) (0xc001fa86e0) Stream removed, broadcasting: 5
I0126 11:34:22.119634       8 log.go:172] (0xc00070bd90) (0xc000a221e0) Stream removed, broadcasting: 1
I0126 11:34:22.119824       8 log.go:172] (0xc00070bd90) (0xc001179900) Stream removed, broadcasting: 3
I0126 11:34:22.119868       8 log.go:172] (0xc00070bd90) (0xc000a221e0) Stream removed, broadcasting: 1
I0126 11:34:22.119883       8 log.go:172] (0xc00070bd90) (0xc001179900) Stream removed, broadcasting: 3
I0126 11:34:22.119892       8 log.go:172] (0xc00070bd90) (0xc001fa86e0) Stream removed, broadcasting: 5
I0126 11:34:22.120174       8 log.go:172] (0xc00070bd90) Go away received
Jan 26 11:34:22.120: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:34:22.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-n6wvw" for this suite.
Jan 26 11:34:42.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:34:42.408: INFO: namespace: e2e-tests-pod-network-test-n6wvw, resource: bindings, ignored listing per whitelist
Jan 26 11:34:42.408: INFO: namespace e2e-tests-pod-network-test-n6wvw deletion completed in 20.257226761s

• [SLOW TEST:53.428 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:34:42.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dcd89
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 26 11:34:42.707: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 26 11:35:23.155: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-dcd89 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 11:35:23.155: INFO: >>> kubeConfig: /root/.kube/config
I0126 11:35:23.244356       8 log.go:172] (0xc001d4e370) (0xc000eae280) Create stream
I0126 11:35:23.244504       8 log.go:172] (0xc001d4e370) (0xc000eae280) Stream added, broadcasting: 1
I0126 11:35:23.252885       8 log.go:172] (0xc001d4e370) Reply frame received for 1
I0126 11:35:23.252938       8 log.go:172] (0xc001d4e370) (0xc001d85220) Create stream
I0126 11:35:23.252956       8 log.go:172] (0xc001d4e370) (0xc001d85220) Stream added, broadcasting: 3
I0126 11:35:23.255090       8 log.go:172] (0xc001d4e370) Reply frame received for 3
I0126 11:35:23.255127       8 log.go:172] (0xc001d4e370) (0xc001b4f720) Create stream
I0126 11:35:23.255139       8 log.go:172] (0xc001d4e370) (0xc001b4f720) Stream added, broadcasting: 5
I0126 11:35:23.256546       8 log.go:172] (0xc001d4e370) Reply frame received for 5
I0126 11:35:23.483536       8 log.go:172] (0xc001d4e370) Data frame received for 3
I0126 11:35:23.483662       8 log.go:172] (0xc001d85220) (3) Data frame handling
I0126 11:35:23.483755       8 log.go:172] (0xc001d85220) (3) Data frame sent
I0126 11:35:23.625458       8 log.go:172] (0xc001d4e370) (0xc001d85220) Stream removed, broadcasting: 3
I0126 11:35:23.625688       8 log.go:172] (0xc001d4e370) Data frame received for 1
I0126 11:35:23.625730       8 log.go:172] (0xc001d4e370) (0xc001b4f720) Stream removed, broadcasting: 5
I0126 11:35:23.625834       8 log.go:172] (0xc000eae280) (1) Data frame handling
I0126 11:35:23.625863       8 log.go:172] (0xc000eae280) (1) Data frame sent
I0126 11:35:23.625873       8 log.go:172] (0xc001d4e370) (0xc000eae280) Stream removed, broadcasting: 1
I0126 11:35:23.625888       8 log.go:172] (0xc001d4e370) Go away received
I0126 11:35:23.626120       8 log.go:172] (0xc001d4e370) (0xc000eae280) Stream removed, broadcasting: 1
I0126 11:35:23.626145       8 log.go:172] (0xc001d4e370) (0xc001d85220) Stream removed, broadcasting: 3
I0126 11:35:23.626163       8 log.go:172] (0xc001d4e370) (0xc001b4f720) Stream removed, broadcasting: 5
Jan 26 11:35:23.626: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:35:23.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-dcd89" for this suite.
Jan 26 11:35:49.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:35:49.824: INFO: namespace: e2e-tests-pod-network-test-dcd89, resource: bindings, ignored listing per whitelist
Jan 26 11:35:49.955: INFO: namespace e2e-tests-pod-network-test-dcd89 deletion completed in 26.312464889s

• [SLOW TEST:67.547 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:35:49.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 11:35:50.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-009b5583-4030-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-grfcs" to be "success or failure"
Jan 26 11:35:50.293: INFO: Pod "downwardapi-volume-009b5583-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 73.156302ms
Jan 26 11:35:52.755: INFO: Pod "downwardapi-volume-009b5583-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.535456952s
Jan 26 11:35:54.787: INFO: Pod "downwardapi-volume-009b5583-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.567109465s
Jan 26 11:35:56.797: INFO: Pod "downwardapi-volume-009b5583-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576819724s
Jan 26 11:35:58.847: INFO: Pod "downwardapi-volume-009b5583-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.627033549s
Jan 26 11:36:00.943: INFO: Pod "downwardapi-volume-009b5583-4030-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.722656412s
STEP: Saw pod success
Jan 26 11:36:00.943: INFO: Pod "downwardapi-volume-009b5583-4030-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:36:01.028: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-009b5583-4030-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 11:36:01.186: INFO: Waiting for pod downwardapi-volume-009b5583-4030-11ea-b664-0242ac110005 to disappear
Jan 26 11:36:01.203: INFO: Pod downwardapi-volume-009b5583-4030-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:36:01.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-grfcs" for this suite.
Jan 26 11:36:07.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:36:07.504: INFO: namespace: e2e-tests-projected-grfcs, resource: bindings, ignored listing per whitelist
Jan 26 11:36:07.594: INFO: namespace e2e-tests-projected-grfcs deletion completed in 6.24209178s

• [SLOW TEST:17.639 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:36:07.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 11:36:07.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 26 11:36:07.949: INFO: stderr: ""
Jan 26 11:36:07.949: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:36:07.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jnwws" for this suite.
Jan 26 11:36:14.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:36:14.257: INFO: namespace: e2e-tests-kubectl-jnwws, resource: bindings, ignored listing per whitelist
Jan 26 11:36:14.269: INFO: namespace e2e-tests-kubectl-jnwws deletion completed in 6.256811053s

• [SLOW TEST:6.675 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:36:14.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 11:36:14.409: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f169e7d-4030-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-djmst" to be "success or failure"
Jan 26 11:36:14.418: INFO: Pod "downwardapi-volume-0f169e7d-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.7999ms
Jan 26 11:36:16.971: INFO: Pod "downwardapi-volume-0f169e7d-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.561331594s
Jan 26 11:36:19.000: INFO: Pod "downwardapi-volume-0f169e7d-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.590779426s
Jan 26 11:36:21.950: INFO: Pod "downwardapi-volume-0f169e7d-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.540663357s
Jan 26 11:36:23.968: INFO: Pod "downwardapi-volume-0f169e7d-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.55820958s
Jan 26 11:36:25.988: INFO: Pod "downwardapi-volume-0f169e7d-4030-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.578518163s
STEP: Saw pod success
Jan 26 11:36:25.988: INFO: Pod "downwardapi-volume-0f169e7d-4030-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:36:25.998: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0f169e7d-4030-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 11:36:26.424: INFO: Waiting for pod downwardapi-volume-0f169e7d-4030-11ea-b664-0242ac110005 to disappear
Jan 26 11:36:26.432: INFO: Pod downwardapi-volume-0f169e7d-4030-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:36:26.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-djmst" for this suite.
Jan 26 11:36:32.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:36:32.619: INFO: namespace: e2e-tests-projected-djmst, resource: bindings, ignored listing per whitelist
Jan 26 11:36:32.689: INFO: namespace e2e-tests-projected-djmst deletion completed in 6.182857146s

• [SLOW TEST:18.419 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:36:32.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 11:36:33.005: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 26 11:36:33.024: INFO: Number of nodes with available pods: 0
Jan 26 11:36:33.024: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:34.050: INFO: Number of nodes with available pods: 0
Jan 26 11:36:34.050: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:35.501: INFO: Number of nodes with available pods: 0
Jan 26 11:36:35.501: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:36.050: INFO: Number of nodes with available pods: 0
Jan 26 11:36:36.050: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:37.047: INFO: Number of nodes with available pods: 0
Jan 26 11:36:37.047: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:38.410: INFO: Number of nodes with available pods: 0
Jan 26 11:36:38.410: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:39.175: INFO: Number of nodes with available pods: 0
Jan 26 11:36:39.175: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:40.924: INFO: Number of nodes with available pods: 0
Jan 26 11:36:40.924: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:41.158: INFO: Number of nodes with available pods: 0
Jan 26 11:36:41.158: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:42.058: INFO: Number of nodes with available pods: 0
Jan 26 11:36:42.058: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:43.063: INFO: Number of nodes with available pods: 1
Jan 26 11:36:43.063: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 26 11:36:43.212: INFO: Wrong image for pod: daemon-set-h49cp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 26 11:36:44.237: INFO: Wrong image for pod: daemon-set-h49cp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 26 11:36:45.602: INFO: Wrong image for pod: daemon-set-h49cp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 26 11:36:46.236: INFO: Wrong image for pod: daemon-set-h49cp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 26 11:36:48.134: INFO: Wrong image for pod: daemon-set-h49cp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 26 11:36:48.582: INFO: Wrong image for pod: daemon-set-h49cp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 26 11:36:49.350: INFO: Wrong image for pod: daemon-set-h49cp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 26 11:36:50.330: INFO: Wrong image for pod: daemon-set-h49cp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 26 11:36:50.331: INFO: Pod daemon-set-h49cp is not available
Jan 26 11:36:51.262: INFO: Pod daemon-set-5ph96 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 26 11:36:51.330: INFO: Number of nodes with available pods: 0
Jan 26 11:36:51.330: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:52.543: INFO: Number of nodes with available pods: 0
Jan 26 11:36:52.543: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:53.373: INFO: Number of nodes with available pods: 0
Jan 26 11:36:53.373: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:54.369: INFO: Number of nodes with available pods: 0
Jan 26 11:36:54.369: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:55.348: INFO: Number of nodes with available pods: 0
Jan 26 11:36:55.348: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:56.877: INFO: Number of nodes with available pods: 0
Jan 26 11:36:56.877: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:57.356: INFO: Number of nodes with available pods: 0
Jan 26 11:36:57.356: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:58.367: INFO: Number of nodes with available pods: 0
Jan 26 11:36:58.367: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:36:59.370: INFO: Number of nodes with available pods: 0
Jan 26 11:36:59.370: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 11:37:00.359: INFO: Number of nodes with available pods: 1
Jan 26 11:37:00.360: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-gwcc6, will wait for the garbage collector to delete the pods
Jan 26 11:37:00.463: INFO: Deleting DaemonSet.extensions daemon-set took: 14.938831ms
Jan 26 11:37:00.763: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.573052ms
Jan 26 11:37:14.083: INFO: Number of nodes with available pods: 0
Jan 26 11:37:14.083: INFO: Number of running nodes: 0, number of available pods: 0
Jan 26 11:37:14.090: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gwcc6/daemonsets","resourceVersion":"19516477"},"items":null}

Jan 26 11:37:14.093: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gwcc6/pods","resourceVersion":"19516477"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:37:14.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-gwcc6" for this suite.
Jan 26 11:37:22.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:37:22.334: INFO: namespace: e2e-tests-daemonsets-gwcc6, resource: bindings, ignored listing per whitelist
Jan 26 11:37:22.409: INFO: namespace e2e-tests-daemonsets-gwcc6 deletion completed in 8.301830192s

• [SLOW TEST:49.720 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
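
For reference, a minimal sketch of the image update this spec performs, expressed as plain kubectl against the resources named in the log above; the container name "app" is an assumption, not something the log confirms:

  # Switch the DaemonSet's container image (RollingUpdate is the default updateStrategy for apps/v1 DaemonSets).
  kubectl -n e2e-tests-daemonsets-gwcc6 set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
  # Watch the old docker.io/library/nginx:1.14-alpine pod be replaced node by node.
  kubectl -n e2e-tests-daemonsets-gwcc6 rollout status daemonset/daemon-set
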
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:37:22.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 26 11:37:22.932: INFO: Waiting up to 5m0s for pod "pod-37decad1-4030-11ea-b664-0242ac110005" in namespace "e2e-tests-emptydir-gqvbc" to be "success or failure"
Jan 26 11:37:22.995: INFO: Pod "pod-37decad1-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 61.731595ms
Jan 26 11:37:25.012: INFO: Pod "pod-37decad1-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078763819s
Jan 26 11:37:27.023: INFO: Pod "pod-37decad1-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090028246s
Jan 26 11:37:29.039: INFO: Pod "pod-37decad1-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106074168s
Jan 26 11:37:31.533: INFO: Pod "pod-37decad1-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.600378856s
Jan 26 11:37:33.549: INFO: Pod "pod-37decad1-4030-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.615799966s
STEP: Saw pod success
Jan 26 11:37:33.549: INFO: Pod "pod-37decad1-4030-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:37:33.557: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-37decad1-4030-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 11:37:34.062: INFO: Waiting for pod pod-37decad1-4030-11ea-b664-0242ac110005 to disappear
Jan 26 11:37:34.086: INFO: Pod pod-37decad1-4030-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:37:34.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gqvbc" for this suite.
Jan 26 11:37:40.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:37:40.332: INFO: namespace: e2e-tests-emptydir-gqvbc, resource: bindings, ignored listing per whitelist
Jan 26 11:37:40.399: INFO: namespace e2e-tests-emptydir-gqvbc deletion completed in 6.203681592s

• [SLOW TEST:17.990 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
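
A minimal, hypothetical pod manifest approximating what this spec creates: an emptyDir volume backed by tmpfs (medium: Memory) whose mount mode is inspected from inside the container. Image, names, and paths are assumptions.

  # emptydir-tmpfs-check.yaml (hypothetical)
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-check
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "stat -c 'mode=%a' /test-volume"]
      volumeMounts:
      - name: tmpfs-vol
        mountPath: /test-volume
    volumes:
    - name: tmpfs-vol
      emptyDir:
        medium: Memory      # tmpfs; omit this field for the node's default storage medium

  kubectl -n e2e-tests-emptydir-gqvbc apply -f emptydir-tmpfs-check.yaml
  kubectl -n e2e-tests-emptydir-gqvbc logs emptydir-tmpfs-check   # after the pod has completed
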
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:37:40.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-42a3ed15-4030-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 11:37:40.925: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-42a505ea-4030-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-4sddv" to be "success or failure"
Jan 26 11:37:40.986: INFO: Pod "pod-projected-configmaps-42a505ea-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 60.455669ms
Jan 26 11:37:42.999: INFO: Pod "pod-projected-configmaps-42a505ea-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073844986s
Jan 26 11:37:45.013: INFO: Pod "pod-projected-configmaps-42a505ea-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087995729s
Jan 26 11:37:47.029: INFO: Pod "pod-projected-configmaps-42a505ea-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103541843s
Jan 26 11:37:49.071: INFO: Pod "pod-projected-configmaps-42a505ea-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146060564s
Jan 26 11:37:51.102: INFO: Pod "pod-projected-configmaps-42a505ea-4030-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.176752795s
STEP: Saw pod success
Jan 26 11:37:51.102: INFO: Pod "pod-projected-configmaps-42a505ea-4030-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:37:51.113: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-42a505ea-4030-11ea-b664-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 26 11:37:51.367: INFO: Waiting for pod pod-projected-configmaps-42a505ea-4030-11ea-b664-0242ac110005 to disappear
Jan 26 11:37:51.416: INFO: Pod pod-projected-configmaps-42a505ea-4030-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:37:51.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4sddv" for this suite.
Jan 26 11:37:59.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:37:59.796: INFO: namespace: e2e-tests-projected-4sddv, resource: bindings, ignored listing per whitelist
Jan 26 11:37:59.845: INFO: namespace e2e-tests-projected-4sddv deletion completed in 8.409822513s

• [SLOW TEST:19.446 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
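
A rough sketch of the object pair this spec exercises: a ConfigMap consumed through a projected volume by a container running as a non-root UID. All names, the key, and the UID are assumptions.

  # projected-configmap-nonroot.yaml (hypothetical)
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: projected-cm
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-reader
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                  # non-root UID
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "id && cat /etc/projected-cm/data-1"]
      volumeMounts:
      - name: cm-vol
        mountPath: /etc/projected-cm
    volumes:
    - name: cm-vol
      projected:
        sources:
        - configMap:
            name: projected-cm
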
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:37:59.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 26 11:38:00.172: INFO: Waiting up to 5m0s for pod "pod-4e20e98f-4030-11ea-b664-0242ac110005" in namespace "e2e-tests-emptydir-wzkkn" to be "success or failure"
Jan 26 11:38:00.180: INFO: Pod "pod-4e20e98f-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056402ms
Jan 26 11:38:02.206: INFO: Pod "pod-4e20e98f-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033437864s
Jan 26 11:38:04.221: INFO: Pod "pod-4e20e98f-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048341526s
Jan 26 11:38:06.600: INFO: Pod "pod-4e20e98f-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42794387s
Jan 26 11:38:08.618: INFO: Pod "pod-4e20e98f-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446179148s
Jan 26 11:38:10.630: INFO: Pod "pod-4e20e98f-4030-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.457810506s
STEP: Saw pod success
Jan 26 11:38:10.630: INFO: Pod "pod-4e20e98f-4030-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:38:10.634: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4e20e98f-4030-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 11:38:11.724: INFO: Waiting for pod pod-4e20e98f-4030-11ea-b664-0242ac110005 to disappear
Jan 26 11:38:12.209: INFO: Pod pod-4e20e98f-4030-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:38:12.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wzkkn" for this suite.
Jan 26 11:38:18.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:38:18.450: INFO: namespace: e2e-tests-emptydir-wzkkn, resource: bindings, ignored listing per whitelist
Jan 26 11:38:18.620: INFO: namespace e2e-tests-emptydir-wzkkn deletion completed in 6.387744053s

• [SLOW TEST:18.774 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
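
The default-medium variant exercised here differs from the tmpfs sketch above only in the volume definition (no medium: Memory). Checking the resulting mode by hand against any pod with an emptyDir mount looks like this; the pod name and mount path are hypothetical:

  kubectl -n e2e-tests-emptydir-wzkkn exec emptydir-default-check -- stat -c 'mode=%a owner=%u' /test-volume
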
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:38:18.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 11:38:19.011: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5954210a-4030-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001cf441a), BlockOwnerDeletion:(*bool)(0xc001cf441b)}}
Jan 26 11:38:19.140: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"59405677-4030-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0022ac8b2), BlockOwnerDeletion:(*bool)(0xc0022ac8b3)}}
Jan 26 11:38:19.155: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"594523c6-4030-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0022aca82), BlockOwnerDeletion:(*bool)(0xc0022aca83)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:38:24.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-w6rrq" for this suite.
Jan 26 11:38:30.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:38:30.421: INFO: namespace: e2e-tests-gc-w6rrq, resource: bindings, ignored listing per whitelist
Jan 26 11:38:30.496: INFO: namespace e2e-tests-gc-w6rrq deletion completed in 6.260060964s

• [SLOW TEST:11.877 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
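
The circular ownership printed above (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) can be reproduced by hand with metadata.ownerReferences patches; the garbage collector is expected to handle such a cycle rather than block. The namespace is taken from the log, everything else is a placeholder:

  # Point pod2's ownerReferences at pod1, using pod1's live UID.
  OWNER_UID=$(kubectl -n e2e-tests-gc-w6rrq get pod pod1 -o jsonpath='{.metadata.uid}')
  kubectl -n e2e-tests-gc-w6rrq patch pod pod2 --type=merge \
    -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod1\",\"uid\":\"$OWNER_UID\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
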
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:38:30.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-psvn2
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-psvn2
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-psvn2
Jan 26 11:38:30.919: INFO: Found 0 stateful pods, waiting for 1
Jan 26 11:38:40.980: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 26 11:38:41.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-psvn2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 11:38:41.901: INFO: stderr: "I0126 11:38:41.540984    1060 log.go:172] (0xc0006f40b0) (0xc000718780) Create stream\nI0126 11:38:41.541058    1060 log.go:172] (0xc0006f40b0) (0xc000718780) Stream added, broadcasting: 1\nI0126 11:38:41.548816    1060 log.go:172] (0xc0006f40b0) Reply frame received for 1\nI0126 11:38:41.548860    1060 log.go:172] (0xc0006f40b0) (0xc0003745a0) Create stream\nI0126 11:38:41.548875    1060 log.go:172] (0xc0006f40b0) (0xc0003745a0) Stream added, broadcasting: 3\nI0126 11:38:41.551094    1060 log.go:172] (0xc0006f40b0) Reply frame received for 3\nI0126 11:38:41.551163    1060 log.go:172] (0xc0006f40b0) (0xc000688dc0) Create stream\nI0126 11:38:41.551194    1060 log.go:172] (0xc0006f40b0) (0xc000688dc0) Stream added, broadcasting: 5\nI0126 11:38:41.554093    1060 log.go:172] (0xc0006f40b0) Reply frame received for 5\nI0126 11:38:41.750009    1060 log.go:172] (0xc0006f40b0) Data frame received for 3\nI0126 11:38:41.750093    1060 log.go:172] (0xc0003745a0) (3) Data frame handling\nI0126 11:38:41.750126    1060 log.go:172] (0xc0003745a0) (3) Data frame sent\nI0126 11:38:41.893415    1060 log.go:172] (0xc0006f40b0) (0xc0003745a0) Stream removed, broadcasting: 3\nI0126 11:38:41.893547    1060 log.go:172] (0xc0006f40b0) Data frame received for 1\nI0126 11:38:41.893567    1060 log.go:172] (0xc000718780) (1) Data frame handling\nI0126 11:38:41.893579    1060 log.go:172] (0xc000718780) (1) Data frame sent\nI0126 11:38:41.893686    1060 log.go:172] (0xc0006f40b0) (0xc000718780) Stream removed, broadcasting: 1\nI0126 11:38:41.893785    1060 log.go:172] (0xc0006f40b0) (0xc000688dc0) Stream removed, broadcasting: 5\nI0126 11:38:41.893829    1060 log.go:172] (0xc0006f40b0) Go away received\nI0126 11:38:41.894059    1060 log.go:172] (0xc0006f40b0) (0xc000718780) Stream removed, broadcasting: 1\nI0126 11:38:41.894087    1060 log.go:172] (0xc0006f40b0) (0xc0003745a0) Stream removed, broadcasting: 3\nI0126 11:38:41.894095    1060 log.go:172] (0xc0006f40b0) (0xc000688dc0) Stream removed, broadcasting: 5\n"
Jan 26 11:38:41.901: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 11:38:41.901: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 11:38:41.915: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 26 11:38:51.932: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 11:38:51.932: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 11:38:52.097: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999469s
Jan 26 11:38:53.126: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.860668231s
Jan 26 11:38:54.172: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.831947093s
Jan 26 11:38:55.189: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.786391036s
Jan 26 11:38:56.212: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.76834656s
Jan 26 11:38:57.255: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.7460348s
Jan 26 11:38:58.273: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.703231065s
Jan 26 11:38:59.362: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.685398204s
Jan 26 11:39:00.382: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.595991795s
Jan 26 11:39:01.413: INFO: Verifying statefulset ss doesn't scale past 1 for another 575.790055ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-psvn2
Jan 26 11:39:02.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-psvn2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 11:39:03.981: INFO: stderr: "I0126 11:39:03.252188    1081 log.go:172] (0xc0001386e0) (0xc000714640) Create stream\nI0126 11:39:03.252355    1081 log.go:172] (0xc0001386e0) (0xc000714640) Stream added, broadcasting: 1\nI0126 11:39:03.260906    1081 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0126 11:39:03.260987    1081 log.go:172] (0xc0001386e0) (0xc0005e0d20) Create stream\nI0126 11:39:03.261002    1081 log.go:172] (0xc0001386e0) (0xc0005e0d20) Stream added, broadcasting: 3\nI0126 11:39:03.263427    1081 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0126 11:39:03.263461    1081 log.go:172] (0xc0001386e0) (0xc0007146e0) Create stream\nI0126 11:39:03.263470    1081 log.go:172] (0xc0001386e0) (0xc0007146e0) Stream added, broadcasting: 5\nI0126 11:39:03.265434    1081 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0126 11:39:03.441823    1081 log.go:172] (0xc0001386e0) Data frame received for 3\nI0126 11:39:03.442125    1081 log.go:172] (0xc0005e0d20) (3) Data frame handling\nI0126 11:39:03.442229    1081 log.go:172] (0xc0005e0d20) (3) Data frame sent\nI0126 11:39:03.971052    1081 log.go:172] (0xc0001386e0) (0xc0005e0d20) Stream removed, broadcasting: 3\nI0126 11:39:03.971211    1081 log.go:172] (0xc0001386e0) Data frame received for 1\nI0126 11:39:03.971286    1081 log.go:172] (0xc0001386e0) (0xc0007146e0) Stream removed, broadcasting: 5\nI0126 11:39:03.971415    1081 log.go:172] (0xc000714640) (1) Data frame handling\nI0126 11:39:03.971450    1081 log.go:172] (0xc000714640) (1) Data frame sent\nI0126 11:39:03.971476    1081 log.go:172] (0xc0001386e0) (0xc000714640) Stream removed, broadcasting: 1\nI0126 11:39:03.971491    1081 log.go:172] (0xc0001386e0) Go away received\nI0126 11:39:03.971751    1081 log.go:172] (0xc0001386e0) (0xc000714640) Stream removed, broadcasting: 1\nI0126 11:39:03.971774    1081 log.go:172] (0xc0001386e0) (0xc0005e0d20) Stream removed, broadcasting: 3\nI0126 11:39:03.971788    1081 log.go:172] (0xc0001386e0) (0xc0007146e0) Stream removed, broadcasting: 5\n"
Jan 26 11:39:03.981: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 11:39:03.981: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 11:39:04.572: INFO: Found 2 stateful pods, waiting for 3
Jan 26 11:39:14.618: INFO: Found 2 stateful pods, waiting for 3
Jan 26 11:39:24.615: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 11:39:24.615: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 11:39:24.615: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 26 11:39:24.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-psvn2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 11:39:25.254: INFO: stderr: "I0126 11:39:24.849333    1103 log.go:172] (0xc000738370) (0xc000695360) Create stream\nI0126 11:39:24.849572    1103 log.go:172] (0xc000738370) (0xc000695360) Stream added, broadcasting: 1\nI0126 11:39:24.857119    1103 log.go:172] (0xc000738370) Reply frame received for 1\nI0126 11:39:24.857154    1103 log.go:172] (0xc000738370) (0xc00036a000) Create stream\nI0126 11:39:24.857173    1103 log.go:172] (0xc000738370) (0xc00036a000) Stream added, broadcasting: 3\nI0126 11:39:24.858013    1103 log.go:172] (0xc000738370) Reply frame received for 3\nI0126 11:39:24.858038    1103 log.go:172] (0xc000738370) (0xc000372000) Create stream\nI0126 11:39:24.858046    1103 log.go:172] (0xc000738370) (0xc000372000) Stream added, broadcasting: 5\nI0126 11:39:24.858801    1103 log.go:172] (0xc000738370) Reply frame received for 5\nI0126 11:39:25.115489    1103 log.go:172] (0xc000738370) Data frame received for 3\nI0126 11:39:25.115551    1103 log.go:172] (0xc00036a000) (3) Data frame handling\nI0126 11:39:25.115560    1103 log.go:172] (0xc00036a000) (3) Data frame sent\nI0126 11:39:25.245269    1103 log.go:172] (0xc000738370) (0xc00036a000) Stream removed, broadcasting: 3\nI0126 11:39:25.245435    1103 log.go:172] (0xc000738370) Data frame received for 1\nI0126 11:39:25.245456    1103 log.go:172] (0xc000695360) (1) Data frame handling\nI0126 11:39:25.245476    1103 log.go:172] (0xc000695360) (1) Data frame sent\nI0126 11:39:25.245491    1103 log.go:172] (0xc000738370) (0xc000695360) Stream removed, broadcasting: 1\nI0126 11:39:25.245731    1103 log.go:172] (0xc000738370) (0xc000372000) Stream removed, broadcasting: 5\nI0126 11:39:25.245777    1103 log.go:172] (0xc000738370) (0xc000695360) Stream removed, broadcasting: 1\nI0126 11:39:25.245788    1103 log.go:172] (0xc000738370) (0xc00036a000) Stream removed, broadcasting: 3\nI0126 11:39:25.245794    1103 log.go:172] (0xc000738370) (0xc000372000) Stream removed, broadcasting: 5\n"
Jan 26 11:39:25.254: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 11:39:25.254: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 11:39:25.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-psvn2 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 11:39:25.819: INFO: stderr: "I0126 11:39:25.465849    1125 log.go:172] (0xc00014c580) (0xc000377360) Create stream\nI0126 11:39:25.466304    1125 log.go:172] (0xc00014c580) (0xc000377360) Stream added, broadcasting: 1\nI0126 11:39:25.472646    1125 log.go:172] (0xc00014c580) Reply frame received for 1\nI0126 11:39:25.472757    1125 log.go:172] (0xc00014c580) (0xc0001c86e0) Create stream\nI0126 11:39:25.472783    1125 log.go:172] (0xc00014c580) (0xc0001c86e0) Stream added, broadcasting: 3\nI0126 11:39:25.474064    1125 log.go:172] (0xc00014c580) Reply frame received for 3\nI0126 11:39:25.474116    1125 log.go:172] (0xc00014c580) (0xc000377400) Create stream\nI0126 11:39:25.474128    1125 log.go:172] (0xc00014c580) (0xc000377400) Stream added, broadcasting: 5\nI0126 11:39:25.476698    1125 log.go:172] (0xc00014c580) Reply frame received for 5\nI0126 11:39:25.679404    1125 log.go:172] (0xc00014c580) Data frame received for 3\nI0126 11:39:25.679458    1125 log.go:172] (0xc0001c86e0) (3) Data frame handling\nI0126 11:39:25.679477    1125 log.go:172] (0xc0001c86e0) (3) Data frame sent\nI0126 11:39:25.807091    1125 log.go:172] (0xc00014c580) (0xc0001c86e0) Stream removed, broadcasting: 3\nI0126 11:39:25.807303    1125 log.go:172] (0xc00014c580) Data frame received for 1\nI0126 11:39:25.807335    1125 log.go:172] (0xc000377360) (1) Data frame handling\nI0126 11:39:25.807649    1125 log.go:172] (0xc000377360) (1) Data frame sent\nI0126 11:39:25.807732    1125 log.go:172] (0xc00014c580) (0xc000377400) Stream removed, broadcasting: 5\nI0126 11:39:25.807848    1125 log.go:172] (0xc00014c580) (0xc000377360) Stream removed, broadcasting: 1\nI0126 11:39:25.808216    1125 log.go:172] (0xc00014c580) Go away received\nI0126 11:39:25.808620    1125 log.go:172] (0xc00014c580) (0xc000377360) Stream removed, broadcasting: 1\nI0126 11:39:25.808666    1125 log.go:172] (0xc00014c580) (0xc0001c86e0) Stream removed, broadcasting: 3\nI0126 11:39:25.808678    1125 log.go:172] (0xc00014c580) (0xc000377400) Stream removed, broadcasting: 5\n"
Jan 26 11:39:25.819: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 11:39:25.819: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 11:39:25.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-psvn2 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 11:39:26.252: INFO: stderr: "I0126 11:39:25.980962    1146 log.go:172] (0xc000700370) (0xc000720640) Create stream\nI0126 11:39:25.981060    1146 log.go:172] (0xc000700370) (0xc000720640) Stream added, broadcasting: 1\nI0126 11:39:25.984894    1146 log.go:172] (0xc000700370) Reply frame received for 1\nI0126 11:39:25.984919    1146 log.go:172] (0xc000700370) (0xc0005bcc80) Create stream\nI0126 11:39:25.984928    1146 log.go:172] (0xc000700370) (0xc0005bcc80) Stream added, broadcasting: 3\nI0126 11:39:25.985990    1146 log.go:172] (0xc000700370) Reply frame received for 3\nI0126 11:39:25.986009    1146 log.go:172] (0xc000700370) (0xc0005bcdc0) Create stream\nI0126 11:39:25.986017    1146 log.go:172] (0xc000700370) (0xc0005bcdc0) Stream added, broadcasting: 5\nI0126 11:39:25.987661    1146 log.go:172] (0xc000700370) Reply frame received for 5\nI0126 11:39:26.108591    1146 log.go:172] (0xc000700370) Data frame received for 3\nI0126 11:39:26.108676    1146 log.go:172] (0xc0005bcc80) (3) Data frame handling\nI0126 11:39:26.108713    1146 log.go:172] (0xc0005bcc80) (3) Data frame sent\nI0126 11:39:26.244436    1146 log.go:172] (0xc000700370) (0xc0005bcc80) Stream removed, broadcasting: 3\nI0126 11:39:26.244897    1146 log.go:172] (0xc000700370) (0xc0005bcdc0) Stream removed, broadcasting: 5\nI0126 11:39:26.244960    1146 log.go:172] (0xc000700370) Data frame received for 1\nI0126 11:39:26.245062    1146 log.go:172] (0xc000720640) (1) Data frame handling\nI0126 11:39:26.245093    1146 log.go:172] (0xc000720640) (1) Data frame sent\nI0126 11:39:26.245102    1146 log.go:172] (0xc000700370) (0xc000720640) Stream removed, broadcasting: 1\nI0126 11:39:26.245126    1146 log.go:172] (0xc000700370) Go away received\nI0126 11:39:26.245496    1146 log.go:172] (0xc000700370) (0xc000720640) Stream removed, broadcasting: 1\nI0126 11:39:26.245547    1146 log.go:172] (0xc000700370) (0xc0005bcc80) Stream removed, broadcasting: 3\nI0126 11:39:26.245570    1146 log.go:172] (0xc000700370) (0xc0005bcdc0) Stream removed, broadcasting: 5\n"
Jan 26 11:39:26.253: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 11:39:26.253: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 11:39:26.253: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 11:39:26.264: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 26 11:39:36.289: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 11:39:36.289: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 11:39:36.289: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 11:39:36.314: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999599s
Jan 26 11:39:37.332: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989226447s
Jan 26 11:39:38.369: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970681988s
Jan 26 11:39:39.395: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.933976508s
Jan 26 11:39:40.415: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.908616945s
Jan 26 11:39:41.436: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.888136773s
Jan 26 11:39:42.456: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.867468202s
Jan 26 11:39:43.480: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.846834582s
Jan 26 11:39:44.515: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.822758557s
Jan 26 11:39:45.574: INFO: Verifying statefulset ss doesn't scale past 3 for another 788.681121ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-psvn2
Jan 26 11:39:46.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-psvn2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 11:39:47.182: INFO: stderr: "I0126 11:39:46.830298    1168 log.go:172] (0xc0005da2c0) (0xc00079f4a0) Create stream\nI0126 11:39:46.830950    1168 log.go:172] (0xc0005da2c0) (0xc00079f4a0) Stream added, broadcasting: 1\nI0126 11:39:46.837952    1168 log.go:172] (0xc0005da2c0) Reply frame received for 1\nI0126 11:39:46.838037    1168 log.go:172] (0xc0005da2c0) (0xc000204000) Create stream\nI0126 11:39:46.838055    1168 log.go:172] (0xc0005da2c0) (0xc000204000) Stream added, broadcasting: 3\nI0126 11:39:46.839460    1168 log.go:172] (0xc0005da2c0) Reply frame received for 3\nI0126 11:39:46.839524    1168 log.go:172] (0xc0005da2c0) (0xc0007ae000) Create stream\nI0126 11:39:46.839538    1168 log.go:172] (0xc0005da2c0) (0xc0007ae000) Stream added, broadcasting: 5\nI0126 11:39:46.840785    1168 log.go:172] (0xc0005da2c0) Reply frame received for 5\nI0126 11:39:46.990399    1168 log.go:172] (0xc0005da2c0) Data frame received for 3\nI0126 11:39:46.990499    1168 log.go:172] (0xc000204000) (3) Data frame handling\nI0126 11:39:46.990593    1168 log.go:172] (0xc000204000) (3) Data frame sent\nI0126 11:39:47.173520    1168 log.go:172] (0xc0005da2c0) Data frame received for 1\nI0126 11:39:47.173983    1168 log.go:172] (0xc0005da2c0) (0xc000204000) Stream removed, broadcasting: 3\nI0126 11:39:47.174048    1168 log.go:172] (0xc00079f4a0) (1) Data frame handling\nI0126 11:39:47.174120    1168 log.go:172] (0xc00079f4a0) (1) Data frame sent\nI0126 11:39:47.174236    1168 log.go:172] (0xc0005da2c0) (0xc0007ae000) Stream removed, broadcasting: 5\nI0126 11:39:47.174504    1168 log.go:172] (0xc0005da2c0) (0xc00079f4a0) Stream removed, broadcasting: 1\nI0126 11:39:47.174594    1168 log.go:172] (0xc0005da2c0) Go away received\nI0126 11:39:47.175042    1168 log.go:172] (0xc0005da2c0) (0xc00079f4a0) Stream removed, broadcasting: 1\nI0126 11:39:47.175065    1168 log.go:172] (0xc0005da2c0) (0xc000204000) Stream removed, broadcasting: 3\nI0126 11:39:47.175073    1168 log.go:172] (0xc0005da2c0) (0xc0007ae000) Stream removed, broadcasting: 5\n"
Jan 26 11:39:47.183: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 11:39:47.183: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 11:39:47.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-psvn2 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 11:39:47.763: INFO: stderr: "I0126 11:39:47.411679    1189 log.go:172] (0xc0008942c0) (0xc000609400) Create stream\nI0126 11:39:47.412395    1189 log.go:172] (0xc0008942c0) (0xc000609400) Stream added, broadcasting: 1\nI0126 11:39:47.418582    1189 log.go:172] (0xc0008942c0) Reply frame received for 1\nI0126 11:39:47.418665    1189 log.go:172] (0xc0008942c0) (0xc000734000) Create stream\nI0126 11:39:47.418683    1189 log.go:172] (0xc0008942c0) (0xc000734000) Stream added, broadcasting: 3\nI0126 11:39:47.421052    1189 log.go:172] (0xc0008942c0) Reply frame received for 3\nI0126 11:39:47.421105    1189 log.go:172] (0xc0008942c0) (0xc0003ea000) Create stream\nI0126 11:39:47.421120    1189 log.go:172] (0xc0008942c0) (0xc0003ea000) Stream added, broadcasting: 5\nI0126 11:39:47.423962    1189 log.go:172] (0xc0008942c0) Reply frame received for 5\nI0126 11:39:47.632147    1189 log.go:172] (0xc0008942c0) Data frame received for 3\nI0126 11:39:47.632196    1189 log.go:172] (0xc000734000) (3) Data frame handling\nI0126 11:39:47.632224    1189 log.go:172] (0xc000734000) (3) Data frame sent\nI0126 11:39:47.749155    1189 log.go:172] (0xc0008942c0) Data frame received for 1\nI0126 11:39:47.749253    1189 log.go:172] (0xc0008942c0) (0xc000734000) Stream removed, broadcasting: 3\nI0126 11:39:47.749319    1189 log.go:172] (0xc000609400) (1) Data frame handling\nI0126 11:39:47.749353    1189 log.go:172] (0xc000609400) (1) Data frame sent\nI0126 11:39:47.749377    1189 log.go:172] (0xc0008942c0) (0xc000609400) Stream removed, broadcasting: 1\nI0126 11:39:47.749602    1189 log.go:172] (0xc0008942c0) (0xc0003ea000) Stream removed, broadcasting: 5\nI0126 11:39:47.749778    1189 log.go:172] (0xc0008942c0) (0xc000609400) Stream removed, broadcasting: 1\nI0126 11:39:47.749803    1189 log.go:172] (0xc0008942c0) (0xc000734000) Stream removed, broadcasting: 3\nI0126 11:39:47.749820    1189 log.go:172] (0xc0008942c0) (0xc0003ea000) Stream removed, broadcasting: 5\n"
Jan 26 11:39:47.763: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 11:39:47.763: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 11:39:47.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-psvn2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 11:39:48.376: INFO: stderr: "I0126 11:39:48.065173    1211 log.go:172] (0xc00066e2c0) (0xc0006d6780) Create stream\nI0126 11:39:48.065328    1211 log.go:172] (0xc00066e2c0) (0xc0006d6780) Stream added, broadcasting: 1\nI0126 11:39:48.071531    1211 log.go:172] (0xc00066e2c0) Reply frame received for 1\nI0126 11:39:48.071564    1211 log.go:172] (0xc00066e2c0) (0xc0006d6820) Create stream\nI0126 11:39:48.071575    1211 log.go:172] (0xc00066e2c0) (0xc0006d6820) Stream added, broadcasting: 3\nI0126 11:39:48.072604    1211 log.go:172] (0xc00066e2c0) Reply frame received for 3\nI0126 11:39:48.072630    1211 log.go:172] (0xc00066e2c0) (0xc00062aaa0) Create stream\nI0126 11:39:48.072641    1211 log.go:172] (0xc00066e2c0) (0xc00062aaa0) Stream added, broadcasting: 5\nI0126 11:39:48.073672    1211 log.go:172] (0xc00066e2c0) Reply frame received for 5\nI0126 11:39:48.219585    1211 log.go:172] (0xc00066e2c0) Data frame received for 3\nI0126 11:39:48.219654    1211 log.go:172] (0xc0006d6820) (3) Data frame handling\nI0126 11:39:48.219678    1211 log.go:172] (0xc0006d6820) (3) Data frame sent\nI0126 11:39:48.362400    1211 log.go:172] (0xc00066e2c0) (0xc0006d6820) Stream removed, broadcasting: 3\nI0126 11:39:48.362809    1211 log.go:172] (0xc00066e2c0) Data frame received for 1\nI0126 11:39:48.362945    1211 log.go:172] (0xc0006d6780) (1) Data frame handling\nI0126 11:39:48.363032    1211 log.go:172] (0xc0006d6780) (1) Data frame sent\nI0126 11:39:48.363131    1211 log.go:172] (0xc00066e2c0) (0xc00062aaa0) Stream removed, broadcasting: 5\nI0126 11:39:48.363253    1211 log.go:172] (0xc00066e2c0) (0xc0006d6780) Stream removed, broadcasting: 1\nI0126 11:39:48.363373    1211 log.go:172] (0xc00066e2c0) Go away received\nI0126 11:39:48.364083    1211 log.go:172] (0xc00066e2c0) (0xc0006d6780) Stream removed, broadcasting: 1\nI0126 11:39:48.364133    1211 log.go:172] (0xc00066e2c0) (0xc0006d6820) Stream removed, broadcasting: 3\nI0126 11:39:48.364149    1211 log.go:172] (0xc00066e2c0) (0xc00062aaa0) Stream removed, broadcasting: 5\n"
Jan 26 11:39:48.376: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 11:39:48.376: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 11:39:48.376: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 26 11:40:08.422: INFO: Deleting all statefulset in ns e2e-tests-statefulset-psvn2
Jan 26 11:40:08.431: INFO: Scaling statefulset ss to 0
Jan 26 11:40:08.484: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 11:40:08.495: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:40:08.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-psvn2" for this suite.
Jan 26 11:40:16.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:40:16.745: INFO: namespace: e2e-tests-statefulset-psvn2, resource: bindings, ignored listing per whitelist
Jan 26 11:40:16.767: INFO: namespace e2e-tests-statefulset-psvn2 deletion completed in 8.173473287s

• [SLOW TEST:106.270 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
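
The readiness flapping in this spec is just the pods' nginx index.html being moved aside and back (the kubectl exec lines above). The ordered scale-up and scale-down themselves can be reproduced with plain kubectl; the names and labels come from the log:

  # Scale up: ss-0, ss-1, ss-2 are created strictly in order, each only after the previous pod is Ready.
  kubectl -n e2e-tests-statefulset-psvn2 scale statefulset/ss --replicas=3
  kubectl -n e2e-tests-statefulset-psvn2 get pods -l baz=blah,foo=bar -w
  # Break ss-0's readiness check the same way the suite does; further scaling halts while it is unhealthy.
  kubectl -n e2e-tests-statefulset-psvn2 exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
  # Scale down: pods are removed in reverse ordinal order (ss-2 first).
  kubectl -n e2e-tests-statefulset-psvn2 scale statefulset/ss --replicas=0
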
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:40:16.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-9fa96c8e-4030-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 11:40:16.985: INFO: Waiting up to 5m0s for pod "pod-configmaps-9faaac1c-4030-11ea-b664-0242ac110005" in namespace "e2e-tests-configmap-7jlqv" to be "success or failure"
Jan 26 11:40:16.993: INFO: Pod "pod-configmaps-9faaac1c-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.431749ms
Jan 26 11:40:19.004: INFO: Pod "pod-configmaps-9faaac1c-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01928283s
Jan 26 11:40:21.019: INFO: Pod "pod-configmaps-9faaac1c-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03395545s
Jan 26 11:40:23.038: INFO: Pod "pod-configmaps-9faaac1c-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05353105s
Jan 26 11:40:25.070: INFO: Pod "pod-configmaps-9faaac1c-4030-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085126267s
Jan 26 11:40:27.088: INFO: Pod "pod-configmaps-9faaac1c-4030-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103526188s
STEP: Saw pod success
Jan 26 11:40:27.088: INFO: Pod "pod-configmaps-9faaac1c-4030-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:40:27.093: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-9faaac1c-4030-11ea-b664-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 26 11:40:27.714: INFO: Waiting for pod pod-configmaps-9faaac1c-4030-11ea-b664-0242ac110005 to disappear
Jan 26 11:40:28.221: INFO: Pod pod-configmaps-9faaac1c-4030-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:40:28.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7jlqv" for this suite.
Jan 26 11:40:34.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:40:34.537: INFO: namespace: e2e-tests-configmap-7jlqv, resource: bindings, ignored listing per whitelist
Jan 26 11:40:34.704: INFO: namespace e2e-tests-configmap-7jlqv deletion completed in 6.470593997s

• [SLOW TEST:17.936 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
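
A hypothetical manifest for the "mappings and Item mode" case: a single ConfigMap key is projected to a chosen path with an explicit per-item file mode. All names, the key, the path, and the mode below are assumptions:

  # configmap-item-mode.yaml (hypothetical)
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-item-mode
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "ls -lR /etc/cm && cat /etc/cm/path/to/data-1"]
      volumeMounts:
      - name: cm-vol
        mountPath: /etc/cm
    volumes:
    - name: cm-vol
      configMap:
        name: configmap-test-volume-map
        items:
        - key: data-1
          path: path/to/data-1
          mode: 0400            # the "Item mode" the spec name refers to
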
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:40:34.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-rhhnt
Jan 26 11:40:45.080: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-rhhnt
STEP: checking the pod's current state and verifying that restartCount is present
Jan 26 11:40:45.084: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:44:45.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rhhnt" for this suite.
Jan 26 11:44:51.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:44:51.641: INFO: namespace: e2e-tests-container-probe-rhhnt, resource: bindings, ignored listing per whitelist
Jan 26 11:44:51.656: INFO: namespace e2e-tests-container-probe-rhhnt deletion completed in 6.455171761s

• [SLOW TEST:256.952 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
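
The probe being exercised is the classic exec liveness check. A minimal sketch of a pod that, like this spec's liveness-exec pod, keeps /tmp/health in place so "cat /tmp/health" keeps succeeding and restartCount stays at 0; image and timings are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec
  spec:
    containers:
    - name: liveness
      image: busybox
      command: ["sh", "-c", "touch /tmp/health && sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
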
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:44:51.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 26 11:44:51.801: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 26 11:44:51.889: INFO: Waiting for terminating namespaces to be deleted...
Jan 26 11:44:51.893: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 26 11:44:51.920: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 26 11:44:51.920: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 26 11:44:51.920: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 26 11:44:51.920: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 26 11:44:51.920: INFO: 	Container weave ready: true, restart count 0
Jan 26 11:44:51.920: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 11:44:51.920: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 26 11:44:51.920: INFO: 	Container coredns ready: true, restart count 0
Jan 26 11:44:51.920: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 26 11:44:51.920: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 26 11:44:51.920: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 26 11:44:51.920: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 26 11:44:51.920: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ed6d2ec87345e3], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:44:53.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-9bkqr" for this suite.
Jan 26 11:44:59.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:44:59.214: INFO: namespace: e2e-tests-sched-pred-9bkqr, resource: bindings, ignored listing per whitelist
Jan 26 11:44:59.395: INFO: namespace e2e-tests-sched-pred-9bkqr deletion completed in 6.335833016s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.739 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
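
The FailedScheduling event above is what a pod whose nodeSelector matches no node label produces. A hypothetical reproduction (label key/value and image are assumptions):

  apiVersion: v1
  kind: Pod
  metadata:
    name: restricted-pod
  spec:
    nodeSelector:
      example.com/does-not-exist: "true"   # no node carries this label
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1

  kubectl -n e2e-tests-sched-pred-9bkqr describe pod restricted-pod   # Events: FailedScheduling ... didn't match node selector
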
SSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:44:59.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 11:44:59.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:45:12.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5skjr" for this suite.
Jan 26 11:45:54.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:45:54.447: INFO: namespace: e2e-tests-pods-5skjr, resource: bindings, ignored listing per whitelist
Jan 26 11:45:54.447: INFO: namespace e2e-tests-pods-5skjr deletion completed in 42.278418791s

• [SLOW TEST:55.052 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
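
This spec drives the pod "exec" subresource over a websocket; kubectl exec talks to the same subresource (upgrading the connection itself). A rough way to poke at it by hand, with a hypothetical pod name:

  kubectl -n e2e-tests-pods-5skjr exec pod-exec-websocket -- /bin/sh -c 'echo remote execution'
  # -v=8 makes kubectl print the .../pods/<name>/exec request URL it upgrades.
  kubectl -n e2e-tests-pods-5skjr exec -v=8 pod-exec-websocket -- true
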
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:45:54.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0126 11:46:26.177306       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 11:46:26.177: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:46:26.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-rpvdf" for this suite.
Jan 26 11:46:34.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:46:34.403: INFO: namespace: e2e-tests-gc-rpvdf, resource: bindings, ignored listing per whitelist
Jan 26 11:46:34.421: INFO: namespace e2e-tests-gc-rpvdf deletion completed in 8.238649483s

• [SLOW TEST:39.973 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
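Annotation: the garbage-collector spec above deletes a Deployment with deleteOptions.PropagationPolicy=Orphan and then waits 30 seconds to confirm the ReplicaSet it created is left behind. A minimal client-go sketch of that delete call follows; the deployment name is hypothetical, the namespace is taken from the log, and the signature assumes a pre-1.17 client-go (newer releases add a context.Context and take DeleteOptions by value).

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Orphan propagation: the Deployment object is removed, but the garbage
	// collector must NOT cascade to the ReplicaSet it owns.
	orphan := metav1.DeletePropagationOrphan
	err = client.AppsV1().
		Deployments("e2e-tests-gc-rpvdf").
		Delete("orphan-demo-deployment", &metav1.DeleteOptions{ // hypothetical name
			PropagationPolicy: &orphan,
		})
	if err != nil {
		panic(err)
	}
}
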
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:46:34.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:46:46.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-svdjh" for this suite.
Jan 26 11:46:53.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:46:53.190: INFO: namespace: e2e-tests-emptydir-wrapper-svdjh, resource: bindings, ignored listing per whitelist
Jan 26 11:46:53.259: INFO: namespace e2e-tests-emptydir-wrapper-svdjh deletion completed in 6.234575439s

• [SLOW TEST:18.838 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
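Annotation: the log above only shows the cleanup steps (secret, configmap, pod), but the point of the check is that a secret-backed volume and a configMap-backed volume mounted in the same pod do not conflict with each other. The sketch below is only a guess at the shape of such a pod spec, built with the k8s.io/api types; the pod name, image, mount paths, and referenced secret/configMap names are all invented.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two wrapper volumes (one secret, one configMap) mounted side by side.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-volumes-demo"}, // hypothetical
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "demo",
				Image: "busybox",
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-vol", MountPath: "/etc/secret"},
					{Name: "configmap-vol", MountPath: "/etc/config"},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "secret-vol", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"},
				}},
				{Name: "configmap-vol", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
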
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:46:53.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 26 11:46:53.570: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wt8nt,SelfLink:/api/v1/namespaces/e2e-tests-watch-wt8nt/configmaps/e2e-watch-test-label-changed,UID:8c0ae6fd-4031-11ea-a994-fa163e34d433,ResourceVersion:19517662,Generation:0,CreationTimestamp:2020-01-26 11:46:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 26 11:46:53.570: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wt8nt,SelfLink:/api/v1/namespaces/e2e-tests-watch-wt8nt/configmaps/e2e-watch-test-label-changed,UID:8c0ae6fd-4031-11ea-a994-fa163e34d433,ResourceVersion:19517663,Generation:0,CreationTimestamp:2020-01-26 11:46:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 26 11:46:53.570: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wt8nt,SelfLink:/api/v1/namespaces/e2e-tests-watch-wt8nt/configmaps/e2e-watch-test-label-changed,UID:8c0ae6fd-4031-11ea-a994-fa163e34d433,ResourceVersion:19517664,Generation:0,CreationTimestamp:2020-01-26 11:46:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 26 11:47:03.661: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wt8nt,SelfLink:/api/v1/namespaces/e2e-tests-watch-wt8nt/configmaps/e2e-watch-test-label-changed,UID:8c0ae6fd-4031-11ea-a994-fa163e34d433,ResourceVersion:19517678,Generation:0,CreationTimestamp:2020-01-26 11:46:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 26 11:47:03.662: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wt8nt,SelfLink:/api/v1/namespaces/e2e-tests-watch-wt8nt/configmaps/e2e-watch-test-label-changed,UID:8c0ae6fd-4031-11ea-a994-fa163e34d433,ResourceVersion:19517679,Generation:0,CreationTimestamp:2020-01-26 11:46:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 26 11:47:03.662: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wt8nt,SelfLink:/api/v1/namespaces/e2e-tests-watch-wt8nt/configmaps/e2e-watch-test-label-changed,UID:8c0ae6fd-4031-11ea-a994-fa163e34d433,ResourceVersion:19517680,Generation:0,CreationTimestamp:2020-01-26 11:46:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:47:03.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-wt8nt" for this suite.
Jan 26 11:47:11.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:47:11.927: INFO: namespace: e2e-tests-watch-wt8nt, resource: bindings, ignored listing per whitelist
Jan 26 11:47:11.978: INFO: namespace e2e-tests-watch-wt8nt deletion completed in 8.290127756s

• [SLOW TEST:18.718 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
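Annotation: the watch spec above registers a watch on configmaps filtered by the label visible in the log (watch-this-configmap=label-changed-and-restored); changing the label away produces a DELETED event on the watch, and restoring it produces an ADDED event, exactly as the "Got : ADDED/MODIFIED/DELETED" lines record. A minimal client-go sketch of such a label-selected watch (pre-1.17 signatures; namespace and selector copied from the log):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Only configmaps carrying this label are delivered on the watch.
	w, err := client.CoreV1().ConfigMaps("e2e-tests-watch-wt8nt").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Print each event type and object, in the spirit of the log lines above.
	for event := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", event.Type, event.Object)
	}
}
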
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:47:11.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 26 11:47:12.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:14.193: INFO: stderr: ""
Jan 26 11:47:14.193: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 26 11:47:14.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:15.416: INFO: stderr: ""
Jan 26 11:47:15.416: INFO: stdout: "update-demo-nautilus-cjslr update-demo-nautilus-sm74m "
Jan 26 11:47:15.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjslr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:15.594: INFO: stderr: ""
Jan 26 11:47:15.594: INFO: stdout: ""
Jan 26 11:47:15.595: INFO: update-demo-nautilus-cjslr is created but not running
Jan 26 11:47:20.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:20.776: INFO: stderr: ""
Jan 26 11:47:20.776: INFO: stdout: "update-demo-nautilus-cjslr update-demo-nautilus-sm74m "
Jan 26 11:47:20.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjslr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:20.928: INFO: stderr: ""
Jan 26 11:47:20.928: INFO: stdout: ""
Jan 26 11:47:20.928: INFO: update-demo-nautilus-cjslr is created but not running
Jan 26 11:47:25.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:26.142: INFO: stderr: ""
Jan 26 11:47:26.142: INFO: stdout: "update-demo-nautilus-cjslr update-demo-nautilus-sm74m "
Jan 26 11:47:26.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjslr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:26.283: INFO: stderr: ""
Jan 26 11:47:26.284: INFO: stdout: ""
Jan 26 11:47:26.284: INFO: update-demo-nautilus-cjslr is created but not running
Jan 26 11:47:31.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:31.411: INFO: stderr: ""
Jan 26 11:47:31.411: INFO: stdout: "update-demo-nautilus-cjslr update-demo-nautilus-sm74m "
Jan 26 11:47:31.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjslr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:31.559: INFO: stderr: ""
Jan 26 11:47:31.559: INFO: stdout: "true"
Jan 26 11:47:31.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjslr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:31.671: INFO: stderr: ""
Jan 26 11:47:31.672: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 11:47:31.672: INFO: validating pod update-demo-nautilus-cjslr
Jan 26 11:47:31.684: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 11:47:31.684: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 11:47:31.684: INFO: update-demo-nautilus-cjslr is verified up and running
Jan 26 11:47:31.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sm74m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:31.778: INFO: stderr: ""
Jan 26 11:47:31.778: INFO: stdout: "true"
Jan 26 11:47:31.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sm74m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:31.934: INFO: stderr: ""
Jan 26 11:47:31.934: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 11:47:31.934: INFO: validating pod update-demo-nautilus-sm74m
Jan 26 11:47:31.947: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 11:47:31.947: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 11:47:31.947: INFO: update-demo-nautilus-sm74m is verified up and running
STEP: scaling down the replication controller
Jan 26 11:47:31.949: INFO: scanned /root for discovery docs: 
Jan 26 11:47:31.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:33.309: INFO: stderr: ""
Jan 26 11:47:33.309: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 26 11:47:33.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:33.559: INFO: stderr: ""
Jan 26 11:47:33.559: INFO: stdout: "update-demo-nautilus-cjslr update-demo-nautilus-sm74m "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 26 11:47:38.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:38.686: INFO: stderr: ""
Jan 26 11:47:38.686: INFO: stdout: "update-demo-nautilus-cjslr update-demo-nautilus-sm74m "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 26 11:47:43.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:43.847: INFO: stderr: ""
Jan 26 11:47:43.847: INFO: stdout: "update-demo-nautilus-cjslr "
Jan 26 11:47:43.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjslr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:44.030: INFO: stderr: ""
Jan 26 11:47:44.030: INFO: stdout: "true"
Jan 26 11:47:44.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjslr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:44.118: INFO: stderr: ""
Jan 26 11:47:44.118: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 11:47:44.118: INFO: validating pod update-demo-nautilus-cjslr
Jan 26 11:47:44.133: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 11:47:44.133: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 11:47:44.133: INFO: update-demo-nautilus-cjslr is verified up and running
STEP: scaling up the replication controller
Jan 26 11:47:44.135: INFO: scanned /root for discovery docs: 
Jan 26 11:47:44.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:45.308: INFO: stderr: ""
Jan 26 11:47:45.308: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 26 11:47:45.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:45.424: INFO: stderr: ""
Jan 26 11:47:45.424: INFO: stdout: "update-demo-nautilus-cjslr update-demo-nautilus-wwx4w "
Jan 26 11:47:45.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjslr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:45.547: INFO: stderr: ""
Jan 26 11:47:45.547: INFO: stdout: "true"
Jan 26 11:47:45.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjslr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:45.927: INFO: stderr: ""
Jan 26 11:47:45.927: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 11:47:45.927: INFO: validating pod update-demo-nautilus-cjslr
Jan 26 11:47:45.942: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 11:47:45.942: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 11:47:45.942: INFO: update-demo-nautilus-cjslr is verified up and running
Jan 26 11:47:45.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wwx4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:46.120: INFO: stderr: ""
Jan 26 11:47:46.120: INFO: stdout: ""
Jan 26 11:47:46.120: INFO: update-demo-nautilus-wwx4w is created but not running
Jan 26 11:47:51.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:51.239: INFO: stderr: ""
Jan 26 11:47:51.239: INFO: stdout: "update-demo-nautilus-cjslr update-demo-nautilus-wwx4w "
Jan 26 11:47:51.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjslr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:51.364: INFO: stderr: ""
Jan 26 11:47:51.365: INFO: stdout: "true"
Jan 26 11:47:51.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjslr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:51.478: INFO: stderr: ""
Jan 26 11:47:51.478: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 11:47:51.478: INFO: validating pod update-demo-nautilus-cjslr
Jan 26 11:47:51.490: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 11:47:51.490: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 11:47:51.490: INFO: update-demo-nautilus-cjslr is verified up and running
Jan 26 11:47:51.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wwx4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:51.593: INFO: stderr: ""
Jan 26 11:47:51.593: INFO: stdout: ""
Jan 26 11:47:51.593: INFO: update-demo-nautilus-wwx4w is created but not running
Jan 26 11:47:56.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:56.775: INFO: stderr: ""
Jan 26 11:47:56.775: INFO: stdout: "update-demo-nautilus-cjslr update-demo-nautilus-wwx4w "
Jan 26 11:47:56.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjslr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:56.880: INFO: stderr: ""
Jan 26 11:47:56.881: INFO: stdout: "true"
Jan 26 11:47:56.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cjslr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:57.084: INFO: stderr: ""
Jan 26 11:47:57.085: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 11:47:57.085: INFO: validating pod update-demo-nautilus-cjslr
Jan 26 11:47:57.131: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 11:47:57.132: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 11:47:57.132: INFO: update-demo-nautilus-cjslr is verified up and running
Jan 26 11:47:57.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wwx4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:57.221: INFO: stderr: ""
Jan 26 11:47:57.221: INFO: stdout: "true"
Jan 26 11:47:57.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wwx4w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:57.359: INFO: stderr: ""
Jan 26 11:47:57.359: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 11:47:57.359: INFO: validating pod update-demo-nautilus-wwx4w
Jan 26 11:47:57.378: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 11:47:57.378: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 11:47:57.378: INFO: update-demo-nautilus-wwx4w is verified up and running
STEP: using delete to clean up resources
Jan 26 11:47:57.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:57.516: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 11:47:57.516: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 26 11:47:57.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-qljbc'
Jan 26 11:47:57.687: INFO: stderr: "No resources found.\n"
Jan 26 11:47:57.687: INFO: stdout: ""
Jan 26 11:47:57.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-qljbc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 26 11:47:57.874: INFO: stderr: ""
Jan 26 11:47:57.875: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:47:57.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qljbc" for this suite.
Jan 26 11:48:22.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:48:22.048: INFO: namespace: e2e-tests-kubectl-qljbc, resource: bindings, ignored listing per whitelist
Jan 26 11:48:22.182: INFO: namespace e2e-tests-kubectl-qljbc deletion completed in 24.274254091s

• [SLOW TEST:70.204 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
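Annotation: the Update Demo spec above drives everything through the kubectl binary, and the exact commands appear verbatim in the log. The small Go sketch below shells out to the same scale command and then lists the surviving pods with the same go-template; the kubectl path, kubeconfig, and namespace are copied from the log, and error handling is deliberately minimal.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes kubectl with the suite's kubeconfig and returns combined output.
func run(args ...string) string {
	out, err := exec.Command("/usr/local/bin/kubectl",
		append([]string{"--kubeconfig=/root/.kube/config"}, args...)...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	ns := "e2e-tests-kubectl-qljbc" // namespace from the log above

	// Scale the replication controller down to one replica.
	fmt.Print(run("scale", "rc", "update-demo-nautilus",
		"--replicas=1", "--timeout=5m", "--namespace="+ns))

	// List pod names the same way the test does, via a go-template.
	fmt.Print(run("get", "pods", "-o", "template",
		"--template={{range.items}}{{.metadata.name}} {{end}}",
		"-l", "name=update-demo", "--namespace="+ns))
}
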
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:48:22.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-c105b695-4031-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 11:48:22.529: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c1068597-4031-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-q8hrx" to be "success or failure"
Jan 26 11:48:22.555: INFO: Pod "pod-projected-configmaps-c1068597-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.001006ms
Jan 26 11:48:24.592: INFO: Pod "pod-projected-configmaps-c1068597-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062798704s
Jan 26 11:48:26.617: INFO: Pod "pod-projected-configmaps-c1068597-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088696273s
Jan 26 11:48:28.932: INFO: Pod "pod-projected-configmaps-c1068597-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.403614146s
Jan 26 11:48:30.958: INFO: Pod "pod-projected-configmaps-c1068597-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.429554161s
Jan 26 11:48:32.980: INFO: Pod "pod-projected-configmaps-c1068597-4031-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.451468427s
STEP: Saw pod success
Jan 26 11:48:32.980: INFO: Pod "pod-projected-configmaps-c1068597-4031-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:48:32.994: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-c1068597-4031-11ea-b664-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 26 11:48:33.152: INFO: Waiting for pod pod-projected-configmaps-c1068597-4031-11ea-b664-0242ac110005 to disappear
Jan 26 11:48:33.163: INFO: Pod pod-projected-configmaps-c1068597-4031-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:48:33.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q8hrx" for this suite.
Jan 26 11:48:39.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:48:39.567: INFO: namespace: e2e-tests-projected-q8hrx, resource: bindings, ignored listing per whitelist
Jan 26 11:48:39.761: INFO: namespace e2e-tests-projected-q8hrx deletion completed in 6.593143874s

• [SLOW TEST:17.579 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
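Annotation: the projected-configMap spec above mounts a configMap through a projected volume and reads a key back from inside the pod. Below is a sketch of just the volume wiring using the k8s.io/api types; the configMap name is copied from the log, while the volume name, key, and path are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							// Name taken from the log above.
							Name: "projected-configmap-test-volume-c105b695-4031-11ea-b664-0242ac110005",
						},
						// Assumed key-to-path mapping inside the mount.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
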
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:48:39.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-cb80dceb-4031-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 11:48:40.035: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cb82b42f-4031-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-mzvtt" to be "success or failure"
Jan 26 11:48:40.162: INFO: Pod "pod-projected-secrets-cb82b42f-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 127.004036ms
Jan 26 11:48:42.179: INFO: Pod "pod-projected-secrets-cb82b42f-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143518621s
Jan 26 11:48:44.190: INFO: Pod "pod-projected-secrets-cb82b42f-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15444698s
Jan 26 11:48:46.202: INFO: Pod "pod-projected-secrets-cb82b42f-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166478269s
Jan 26 11:48:48.248: INFO: Pod "pod-projected-secrets-cb82b42f-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.212535585s
Jan 26 11:48:50.260: INFO: Pod "pod-projected-secrets-cb82b42f-4031-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.224409125s
STEP: Saw pod success
Jan 26 11:48:50.260: INFO: Pod "pod-projected-secrets-cb82b42f-4031-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:48:50.264: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-cb82b42f-4031-11ea-b664-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 26 11:48:50.488: INFO: Waiting for pod pod-projected-secrets-cb82b42f-4031-11ea-b664-0242ac110005 to disappear
Jan 26 11:48:50.514: INFO: Pod pod-projected-secrets-cb82b42f-4031-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:48:50.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mzvtt" for this suite.
Jan 26 11:48:56.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:48:56.700: INFO: namespace: e2e-tests-projected-mzvtt, resource: bindings, ignored listing per whitelist
Jan 26 11:48:56.829: INFO: namespace e2e-tests-projected-mzvtt deletion completed in 6.296006302s

• [SLOW TEST:17.067 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
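Annotation: this projected-secret variant additionally runs the pod as a non-root user and sets defaultMode and fsGroup, so the projected files are created with permissions the non-root user can read. A sketch of the relevant spec fields follows; the secret name is copied from the log, while the UID, GID, and mode values are illustrative, not the test's actual values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid, fsGroup := int64(1000), int64(1001)
	mode := int32(0440)

	spec := corev1.PodSpec{
		// Run the container as non-root; fsGroup controls group ownership of
		// the projected files so that non-root user can still read them.
		SecurityContext: &corev1.PodSecurityContext{
			RunAsUser: &uid,
			FSGroup:   &fsGroup,
		},
		Volumes: []corev1.Volume{{
			Name: "projected-secret-volume",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					DefaultMode: &mode, // file mode applied to projected entries
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{
								// Secret name taken from the log above.
								Name: "projected-secret-test-cb80dceb-4031-11ea-b664-0242ac110005",
							},
						},
					}},
				},
			},
		}},
	}
	b, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(b))
}
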
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:48:56.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 11:48:57.055: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5a6459e-4031-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-l8shf" to be "success or failure"
Jan 26 11:48:57.074: INFO: Pod "downwardapi-volume-d5a6459e-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.188212ms
Jan 26 11:48:59.143: INFO: Pod "downwardapi-volume-d5a6459e-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088152141s
Jan 26 11:49:01.161: INFO: Pod "downwardapi-volume-d5a6459e-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10660322s
Jan 26 11:49:03.652: INFO: Pod "downwardapi-volume-d5a6459e-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.597138377s
Jan 26 11:49:05.696: INFO: Pod "downwardapi-volume-d5a6459e-4031-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.64153015s
Jan 26 11:49:07.879: INFO: Pod "downwardapi-volume-d5a6459e-4031-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.824366692s
STEP: Saw pod success
Jan 26 11:49:07.879: INFO: Pod "downwardapi-volume-d5a6459e-4031-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:49:08.144: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d5a6459e-4031-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 11:49:08.332: INFO: Waiting for pod downwardapi-volume-d5a6459e-4031-11ea-b664-0242ac110005 to disappear
Jan 26 11:49:08.358: INFO: Pod downwardapi-volume-d5a6459e-4031-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:49:08.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l8shf" for this suite.
Jan 26 11:49:14.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:49:14.571: INFO: namespace: e2e-tests-projected-l8shf, resource: bindings, ignored listing per whitelist
Jan 26 11:49:14.663: INFO: namespace e2e-tests-projected-l8shf deletion completed in 6.292932294s

• [SLOW TEST:17.834 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
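Annotation: here the projected volume carries downward API data instead: the container's memory request is exposed as a file through a resourceFieldRef, and the test reads that file back. A sketch of that projection follows; "client-container" matches the container name in the log, while the volume name and file path are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The file's content is the container's memory request,
							// which is what the test asserts on.
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
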
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:49:14.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 26 11:49:21.883: INFO: 10 pods remaining
Jan 26 11:49:21.884: INFO: 10 pods has nil DeletionTimestamp
Jan 26 11:49:21.884: INFO: 
Jan 26 11:49:22.472: INFO: 10 pods remaining
Jan 26 11:49:22.473: INFO: 10 pods has nil DeletionTimestamp
Jan 26 11:49:22.473: INFO: 
Jan 26 11:49:23.596: INFO: 0 pods remaining
Jan 26 11:49:23.596: INFO: 0 pods has nil DeletionTimestamp
Jan 26 11:49:23.596: INFO: 
STEP: Gathering metrics
W0126 11:49:24.272673       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 11:49:24.272: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:49:24.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-hw29t" for this suite.
Jan 26 11:49:36.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:49:36.709: INFO: namespace: e2e-tests-gc-hw29t, resource: bindings, ignored listing per whitelist
Jan 26 11:49:36.725: INFO: namespace e2e-tests-gc-hw29t deletion completed in 12.448945016s

• [SLOW TEST:22.062 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
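Annotation: this is the counterpart to the orphan case earlier. Deleting the replication controller with foreground propagation keeps the RC object around (marked with a deletionTimestamp) until the garbage collector has removed every pod it owns, which is the "N pods remaining / N pods has nil DeletionTimestamp" countdown in the log. A minimal client-go sketch of such a delete (pre-1.17 signature; namespace from the log, RC name hypothetical):

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Foreground propagation: the RC stays in a deleting state until the
	// garbage collector has deleted every pod it owns, then disappears.
	foreground := metav1.DeletePropagationForeground
	err = client.CoreV1().
		ReplicationControllers("e2e-tests-gc-hw29t").
		Delete("foreground-demo-rc", &metav1.DeleteOptions{ // hypothetical name
			PropagationPolicy: &foreground,
		})
	if err != nil {
		panic(err)
	}
}
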
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:49:36.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-jmwll
Jan 26 11:49:47.009: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-jmwll
STEP: checking the pod's current state and verifying that restartCount is present
Jan 26 11:49:47.015: INFO: Initial restart count of pod liveness-exec is 0
Jan 26 11:50:39.891: INFO: Restart count of pod e2e-tests-container-probe-jmwll/liveness-exec is now 1 (52.875803475s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:50:40.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jmwll" for this suite.
Jan 26 11:50:48.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:50:48.169: INFO: namespace: e2e-tests-container-probe-jmwll, resource: bindings, ignored listing per whitelist
Jan 26 11:50:48.214: INFO: namespace e2e-tests-container-probe-jmwll deletion completed in 8.194878122s

• [SLOW TEST:71.488 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
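Annotation: the probe spec above runs a container whose exec liveness probe is "cat /tmp/health"; once the file disappears the probe fails and the kubelet restarts the container, which is the restart count going from 0 to 1 in the log. A sketch of such a container spec follows; the image, shell script, and timing values are illustrative rather than the test's own, and the Handler field name matches the v1.13-era API (later releases rename it ProbeHandler).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "liveness-exec",
		Image: "busybox",
		// Create /tmp/health, keep it for a while, then remove it so the
		// probe starts failing and the kubelet restarts the container.
		Command: []string{"/bin/sh", "-c",
			"touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			PeriodSeconds:       5,
			FailureThreshold:    1,
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}
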
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:50:48.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 11:50:48.630: INFO: Waiting up to 5m0s for pod "downwardapi-volume-181a8eeb-4032-11ea-b664-0242ac110005" in namespace "e2e-tests-downward-api-tlzvg" to be "success or failure"
Jan 26 11:50:48.638: INFO: Pod "downwardapi-volume-181a8eeb-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.798544ms
Jan 26 11:50:50.767: INFO: Pod "downwardapi-volume-181a8eeb-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137210405s
Jan 26 11:50:52.778: INFO: Pod "downwardapi-volume-181a8eeb-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148433297s
Jan 26 11:50:54.793: INFO: Pod "downwardapi-volume-181a8eeb-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163346097s
Jan 26 11:50:56.826: INFO: Pod "downwardapi-volume-181a8eeb-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.196200863s
Jan 26 11:50:58.842: INFO: Pod "downwardapi-volume-181a8eeb-4032-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.211846887s
STEP: Saw pod success
Jan 26 11:50:58.842: INFO: Pod "downwardapi-volume-181a8eeb-4032-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:50:58.848: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-181a8eeb-4032-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 11:50:59.513: INFO: Waiting for pod downwardapi-volume-181a8eeb-4032-11ea-b664-0242ac110005 to disappear
Jan 26 11:50:59.847: INFO: Pod downwardapi-volume-181a8eeb-4032-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:50:59.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tlzvg" for this suite.
Jan 26 11:51:05.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:51:06.045: INFO: namespace: e2e-tests-downward-api-tlzvg, resource: bindings, ignored listing per whitelist
Jan 26 11:51:06.100: INFO: namespace e2e-tests-downward-api-tlzvg deletion completed in 6.220146567s

• [SLOW TEST:17.885 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
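Annotation: this downward API test sets an explicit per-item file mode and asserts that the file lands in the volume with those permissions. The sketch below shows a downward API volume item with an explicit Mode; the 0400 value, the metadata.name fieldRef, and the file path are assumptions, not taken from the test source.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // assumed per-item mode

	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "podname",
					Mode: &mode, // explicit mode on this item, overriding the volume default
					FieldRef: &corev1.ObjectFieldSelector{
						FieldPath: "metadata.name",
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
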
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:51:06.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 26 11:51:06.322: INFO: namespace e2e-tests-kubectl-52dkg
Jan 26 11:51:06.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-52dkg'
Jan 26 11:51:06.798: INFO: stderr: ""
Jan 26 11:51:06.798: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 26 11:51:08.309: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:51:08.309: INFO: Found 0 / 1
Jan 26 11:51:08.825: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:51:08.825: INFO: Found 0 / 1
Jan 26 11:51:09.959: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:51:09.959: INFO: Found 0 / 1
Jan 26 11:51:10.817: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:51:10.817: INFO: Found 0 / 1
Jan 26 11:51:11.876: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:51:11.876: INFO: Found 0 / 1
Jan 26 11:51:13.628: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:51:13.629: INFO: Found 0 / 1
Jan 26 11:51:14.073: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:51:14.073: INFO: Found 0 / 1
Jan 26 11:51:14.809: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:51:14.809: INFO: Found 0 / 1
Jan 26 11:51:15.814: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:51:15.814: INFO: Found 0 / 1
Jan 26 11:51:16.815: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:51:16.815: INFO: Found 0 / 1
Jan 26 11:51:17.819: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:51:17.819: INFO: Found 1 / 1
Jan 26 11:51:17.819: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 26 11:51:17.829: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:51:17.829: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 26 11:51:17.829: INFO: wait on redis-master startup in e2e-tests-kubectl-52dkg 
Jan 26 11:51:17.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wzfd4 redis-master --namespace=e2e-tests-kubectl-52dkg'
Jan 26 11:51:18.023: INFO: stderr: ""
Jan 26 11:51:18.023: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 26 Jan 11:51:15.993 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Jan 11:51:15.994 # Server started, Redis version 3.2.12\n1:M 26 Jan 11:51:15.994 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 Jan 11:51:15.994 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 26 11:51:18.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-52dkg'
Jan 26 11:51:18.304: INFO: stderr: ""
Jan 26 11:51:18.304: INFO: stdout: "service/rm2 exposed\n"
Jan 26 11:51:18.314: INFO: Service rm2 in namespace e2e-tests-kubectl-52dkg found.
STEP: exposing service
Jan 26 11:51:20.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-52dkg'
Jan 26 11:51:20.637: INFO: stderr: ""
Jan 26 11:51:20.637: INFO: stdout: "service/rm3 exposed\n"
Jan 26 11:51:20.681: INFO: Service rm3 in namespace e2e-tests-kubectl-52dkg found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:51:22.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-52dkg" for this suite.
Jan 26 11:51:46.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:51:46.839: INFO: namespace: e2e-tests-kubectl-52dkg, resource: bindings, ignored listing per whitelist
Jan 26 11:51:46.870: INFO: namespace e2e-tests-kubectl-52dkg deletion completed in 24.150086499s

• [SLOW TEST:40.770 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
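For reference, stripped of the harness the expose sequence above is two plain kubectl invocations: the first turns the redis-master replication controller into a Service, the second re-exposes that Service on another port. Names, ports and namespace below are copied from the log; the --kubeconfig flag is omitted.

$ kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-52dkg
service/rm2 exposed
$ kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-52dkg
service/rm3 exposed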
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:51:46.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-3b024015-4032-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 11:51:47.321: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3b03283b-4032-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-s59bc" to be "success or failure"
Jan 26 11:51:47.355: INFO: Pod "pod-projected-secrets-3b03283b-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.905418ms
Jan 26 11:51:49.371: INFO: Pod "pod-projected-secrets-3b03283b-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049974566s
Jan 26 11:51:51.395: INFO: Pod "pod-projected-secrets-3b03283b-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07298096s
Jan 26 11:51:53.650: INFO: Pod "pod-projected-secrets-3b03283b-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.32809451s
Jan 26 11:51:55.666: INFO: Pod "pod-projected-secrets-3b03283b-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.344054823s
Jan 26 11:51:58.159: INFO: Pod "pod-projected-secrets-3b03283b-4032-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.837742051s
STEP: Saw pod success
Jan 26 11:51:58.159: INFO: Pod "pod-projected-secrets-3b03283b-4032-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:51:58.181: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-3b03283b-4032-11ea-b664-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 26 11:51:58.529: INFO: Waiting for pod pod-projected-secrets-3b03283b-4032-11ea-b664-0242ac110005 to disappear
Jan 26 11:51:58.545: INFO: Pod pod-projected-secrets-3b03283b-4032-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:51:58.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-s59bc" for this suite.
Jan 26 11:52:06.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:52:06.826: INFO: namespace: e2e-tests-projected-s59bc, resource: bindings, ignored listing per whitelist
Jan 26 11:52:06.855: INFO: namespace e2e-tests-projected-s59bc deletion completed in 8.29044876s

• [SLOW TEST:19.984 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
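The pod in this test mounts a single Secret twice through two projected volumes and reads the key back from both mount points. A rough hand-built equivalent follows; the secret name, key, image, paths and the container command are illustrative, not taken from the log.

$ kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-1/data-1 /etc/projected-secret-2/data-1"]
    volumeMounts:
    - { name: vol-1, mountPath: /etc/projected-secret-1 }
    - { name: vol-2, mountPath: /etc/projected-secret-2 }
  volumes:
  - name: vol-1
    projected:
      sources:
      - secret: { name: projected-secret-demo }
  - name: vol-2
    projected:
      sources:
      - secret: { name: projected-secret-demo }
EOF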
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:52:06.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 26 11:52:27.375: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 11:52:27.453: INFO: Pod pod-with-prestop-http-hook still exists
Jan 26 11:52:29.453: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 11:52:29.601: INFO: Pod pod-with-prestop-http-hook still exists
Jan 26 11:52:31.453: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 11:52:31.526: INFO: Pod pod-with-prestop-http-hook still exists
Jan 26 11:52:33.453: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 11:52:33.824: INFO: Pod pod-with-prestop-http-hook still exists
Jan 26 11:52:35.453: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 11:52:35.477: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:52:35.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7qz56" for this suite.
Jan 26 11:52:59.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:52:59.764: INFO: namespace: e2e-tests-container-lifecycle-hook-7qz56, resource: bindings, ignored listing per whitelist
Jan 26 11:52:59.807: INFO: namespace e2e-tests-container-lifecycle-hook-7qz56 deletion completed in 24.267192134s

• [SLOW TEST:52.951 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
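The pod under test declares a preStop HTTP hook; when the pod is deleted, the kubelet issues the GET before killing the container, which is why the log shows the repeated "still exists" polls during the grace period. A minimal sketch of the hook side; the image, path, port and host IP are illustrative (the real test points the hook at a separate handler pod).

$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop
          port: 8080
          host: 10.32.0.99     # hypothetical IP of the hook-handler pod
EOF
$ kubectl delete pod pod-with-prestop-http-hook    # kubelet fires the preStop GET during termination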
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:52:59.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-66763a48-4032-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 11:53:00.059: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-66772f3a-4032-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-h78rq" to be "success or failure"
Jan 26 11:53:00.064: INFO: Pod "pod-projected-configmaps-66772f3a-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.189528ms
Jan 26 11:53:02.079: INFO: Pod "pod-projected-configmaps-66772f3a-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020662194s
Jan 26 11:53:04.115: INFO: Pod "pod-projected-configmaps-66772f3a-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055669969s
Jan 26 11:53:06.344: INFO: Pod "pod-projected-configmaps-66772f3a-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.285008252s
Jan 26 11:53:08.369: INFO: Pod "pod-projected-configmaps-66772f3a-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.310510319s
Jan 26 11:53:10.385: INFO: Pod "pod-projected-configmaps-66772f3a-4032-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.326599149s
STEP: Saw pod success
Jan 26 11:53:10.386: INFO: Pod "pod-projected-configmaps-66772f3a-4032-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:53:10.394: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-66772f3a-4032-11ea-b664-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 26 11:53:10.576: INFO: Waiting for pod pod-projected-configmaps-66772f3a-4032-11ea-b664-0242ac110005 to disappear
Jan 26 11:53:10.607: INFO: Pod pod-projected-configmaps-66772f3a-4032-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:53:10.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h78rq" for this suite.
Jan 26 11:53:16.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:53:16.833: INFO: namespace: e2e-tests-projected-h78rq, resource: bindings, ignored listing per whitelist
Jan 26 11:53:16.940: INFO: namespace e2e-tests-projected-h78rq deletion completed in 6.316137563s

• [SLOW TEST:17.133 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
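"Mappings and Item mode" here means the projected configMap volume remaps a key to a different file path via items[] and sets a per-file mode. A hand-rolled version; the configMap name, key, target path and mode are illustrative.

$ kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-cm/path/to/data-1 && cat /etc/projected-cm/path/to/data-1"]
    volumeMounts:
    - { name: cm-vol, mountPath: /etc/projected-cm }
  volumes:
  - name: cm-vol
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: path/to/data-1
            mode: 0400
EOF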
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:53:16.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan 26 11:53:17.198: INFO: Waiting up to 5m0s for pod "client-containers-70acf088-4032-11ea-b664-0242ac110005" in namespace "e2e-tests-containers-vl7z9" to be "success or failure"
Jan 26 11:53:17.206: INFO: Pod "client-containers-70acf088-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.283951ms
Jan 26 11:53:19.440: INFO: Pod "client-containers-70acf088-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242283823s
Jan 26 11:53:21.469: INFO: Pod "client-containers-70acf088-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27154178s
Jan 26 11:53:23.487: INFO: Pod "client-containers-70acf088-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.28915799s
Jan 26 11:53:26.021: INFO: Pod "client-containers-70acf088-4032-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.82295035s
Jan 26 11:53:28.158: INFO: Pod "client-containers-70acf088-4032-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.960237601s
STEP: Saw pod success
Jan 26 11:53:28.158: INFO: Pod "client-containers-70acf088-4032-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 11:53:28.179: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-70acf088-4032-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 11:53:28.682: INFO: Waiting for pod client-containers-70acf088-4032-11ea-b664-0242ac110005 to disappear
Jan 26 11:53:28.704: INFO: Pod client-containers-70acf088-4032-11ea-b664-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:53:28.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-vl7z9" for this suite.
Jan 26 11:53:34.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:53:34.985: INFO: namespace: e2e-tests-containers-vl7z9, resource: bindings, ignored listing per whitelist
Jan 26 11:53:35.228: INFO: namespace e2e-tests-containers-vl7z9 deletion completed in 6.509532007s

• [SLOW TEST:18.287 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
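Overriding the image's default command amounts to setting spec.containers[].command, which replaces the Docker ENTRYPOINT (args would replace CMD). A minimal sketch; image and command are illustrative.

$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo", "entrypoint overridden"]   # replaces the image ENTRYPOINT
EOF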
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:53:35.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 11:54:08.150: INFO: Container started at 2020-01-26 11:53:43 +0000 UTC, pod became ready at 2020-01-26 11:54:06 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:54:08.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-mkr9f" for this suite.
Jan 26 11:54:32.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:54:32.386: INFO: namespace: e2e-tests-container-probe-mkr9f, resource: bindings, ignored listing per whitelist
Jan 26 11:54:32.503: INFO: namespace e2e-tests-container-probe-mkr9f deletion completed in 24.342968525s

• [SLOW TEST:57.274 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
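This probe test starts a container whose readiness probe has a long initialDelaySeconds, then asserts the pod is not Ready before that delay and never restarts; the log above shows roughly 23 seconds between container start and readiness. A comparable pod by hand; the image, delay and probe command are illustrative.

$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/true"]
      initialDelaySeconds: 20
      periodSeconds: 5
EOF
$ kubectl get pod readiness-delay-demo -o jsonpath='{.status.containerStatuses[0].ready}'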
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:54:32.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-fn9m9
Jan 26 11:54:42.893: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-fn9m9
STEP: checking the pod's current state and verifying that restartCount is present
Jan 26 11:54:42.903: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:58:43.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-fn9m9" for this suite.
Jan 26 11:58:49.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:58:50.107: INFO: namespace: e2e-tests-container-probe-fn9m9, resource: bindings, ignored listing per whitelist
Jan 26 11:58:50.136: INFO: namespace e2e-tests-container-probe-fn9m9 deletion completed in 6.363770831s

• [SLOW TEST:257.632 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
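Here the container serves an HTTP /healthz endpoint, the liveness probe keeps succeeding, and the test simply watches restartCount stay at 0 for the whole observation window (the four minutes between 11:54 and 11:58 above). The stanza looks roughly like this; the image is the e2e liveness image from this node's image list, and its /healthz-on-8080 behaviour is assumed rather than taken from the log.

$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.0   # assumed to serve /healthz on 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
EOF
$ kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'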
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:58:50.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan 26 11:58:50.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ggvc6'
Jan 26 11:58:52.693: INFO: stderr: ""
Jan 26 11:58:52.694: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan 26 11:58:54.017: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:58:54.017: INFO: Found 0 / 1
Jan 26 11:58:54.717: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:58:54.717: INFO: Found 0 / 1
Jan 26 11:58:55.831: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:58:55.831: INFO: Found 0 / 1
Jan 26 11:58:56.707: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:58:56.708: INFO: Found 0 / 1
Jan 26 11:58:58.973: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:58:58.974: INFO: Found 0 / 1
Jan 26 11:59:00.081: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:59:00.081: INFO: Found 0 / 1
Jan 26 11:59:00.710: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:59:00.710: INFO: Found 0 / 1
Jan 26 11:59:01.728: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:59:01.728: INFO: Found 1 / 1
Jan 26 11:59:01.728: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 26 11:59:01.746: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 11:59:01.746: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan 26 11:59:01.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bb7dp redis-master --namespace=e2e-tests-kubectl-ggvc6'
Jan 26 11:59:01.949: INFO: stderr: ""
Jan 26 11:59:01.949: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 26 Jan 11:59:00.993 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Jan 11:59:00.993 # Server started, Redis version 3.2.12\n1:M 26 Jan 11:59:00.994 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 Jan 11:59:00.994 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 26 11:59:01.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bb7dp redis-master --namespace=e2e-tests-kubectl-ggvc6 --tail=1'
Jan 26 11:59:02.106: INFO: stderr: ""
Jan 26 11:59:02.107: INFO: stdout: "1:M 26 Jan 11:59:00.994 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 26 11:59:02.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bb7dp redis-master --namespace=e2e-tests-kubectl-ggvc6 --limit-bytes=1'
Jan 26 11:59:02.265: INFO: stderr: ""
Jan 26 11:59:02.265: INFO: stdout: " "
STEP: exposing timestamps
Jan 26 11:59:02.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bb7dp redis-master --namespace=e2e-tests-kubectl-ggvc6 --tail=1 --timestamps'
Jan 26 11:59:02.430: INFO: stderr: ""
Jan 26 11:59:02.430: INFO: stdout: "2020-01-26T11:59:00.994987348Z 1:M 26 Jan 11:59:00.994 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 26 11:59:04.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bb7dp redis-master --namespace=e2e-tests-kubectl-ggvc6 --since=1s'
Jan 26 11:59:05.126: INFO: stderr: ""
Jan 26 11:59:05.127: INFO: stdout: ""
Jan 26 11:59:05.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bb7dp redis-master --namespace=e2e-tests-kubectl-ggvc6 --since=24h'
Jan 26 11:59:05.309: INFO: stderr: ""
Jan 26 11:59:05.309: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 26 Jan 11:59:00.993 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Jan 11:59:00.993 # Server started, Redis version 3.2.12\n1:M 26 Jan 11:59:00.994 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 Jan 11:59:00.994 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan 26 11:59:05.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ggvc6'
Jan 26 11:59:05.523: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 11:59:05.523: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 26 11:59:05.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-ggvc6'
Jan 26 11:59:05.688: INFO: stderr: "No resources found.\n"
Jan 26 11:59:05.688: INFO: stdout: ""
Jan 26 11:59:05.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-ggvc6 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 26 11:59:05.800: INFO: stderr: ""
Jan 26 11:59:05.800: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:59:05.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ggvc6" for this suite.
Jan 26 11:59:29.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:59:30.087: INFO: namespace: e2e-tests-kubectl-ggvc6, resource: bindings, ignored listing per whitelist
Jan 26 11:59:30.126: INFO: namespace e2e-tests-kubectl-ggvc6 deletion completed in 24.298986851s

• [SLOW TEST:39.989 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
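The filtering steps map directly onto kubectl's log flags (the run above uses the older "kubectl log" alias; "kubectl logs" is the current spelling). With the pod and container names from this run:

$ kubectl logs redis-master-bb7dp redis-master --namespace=e2e-tests-kubectl-ggvc6 --tail=1
$ kubectl logs redis-master-bb7dp redis-master --namespace=e2e-tests-kubectl-ggvc6 --limit-bytes=1
$ kubectl logs redis-master-bb7dp redis-master --namespace=e2e-tests-kubectl-ggvc6 --tail=1 --timestamps
$ kubectl logs redis-master-bb7dp redis-master --namespace=e2e-tests-kubectl-ggvc6 --since=1s
$ kubectl logs redis-master-bb7dp redis-master --namespace=e2e-tests-kubectl-ggvc6 --since=24h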
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:59:30.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 11:59:30.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-hzld5" for this suite.
Jan 26 11:59:42.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 11:59:42.546: INFO: namespace: e2e-tests-pods-hzld5, resource: bindings, ignored listing per whitelist
Jan 26 11:59:42.635: INFO: namespace e2e-tests-pods-hzld5 deletion completed in 12.358318644s

• [SLOW TEST:12.508 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
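The QOS-class check comes down to how Kubernetes classifies a pod from its resource stanza: requests equal to limits on every container gives Guaranteed, requests below limits (or requests only) gives Burstable, and no requests or limits gives BestEffort; the result is readable from status.qosClass. A sketch with illustrative names and numbers:

$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: nginx:1.14-alpine
    resources:
      requests: { cpu: 100m, memory: 64Mi }
      limits:   { cpu: 100m, memory: 64Mi }
EOF
$ kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints Guaranteed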
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 11:59:42.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 26 11:59:42.730: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:00:00.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-84cd8" for this suite.
Jan 26 12:00:08.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:00:08.429: INFO: namespace: e2e-tests-init-container-84cd8, resource: bindings, ignored listing per whitelist
Jan 26 12:00:08.444: INFO: namespace e2e-tests-init-container-84cd8 deletion completed in 8.243122715s

• [SLOW TEST:25.809 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
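The behaviour verified here: with restartPolicy: Never, a failing init container is not retried, the app containers never start, and the pod ends up in phase Failed. A minimal reproduction; image and commands are illustrative.

$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["/bin/false"]
  containers:
  - name: app-never-starts
    image: busybox
    command: ["/bin/echo", "should not run"]
EOF
$ kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # Failed once the init container has run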
SSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:00:08.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-wnx2k
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wnx2k to expose endpoints map[]
Jan 26 12:00:09.020: INFO: Get endpoints failed (104.501892ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 26 12:00:10.036: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wnx2k exposes endpoints map[] (1.12026578s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-wnx2k
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wnx2k to expose endpoints map[pod1:[80]]
Jan 26 12:00:15.391: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.324747605s elapsed, will retry)
Jan 26 12:00:20.872: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wnx2k exposes endpoints map[pod1:[80]] (10.80585012s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-wnx2k
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wnx2k to expose endpoints map[pod1:[80] pod2:[80]]
Jan 26 12:00:25.289: INFO: Unexpected endpoints: found map[66cd6d08-4033-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.410864295s elapsed, will retry)
Jan 26 12:00:28.717: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wnx2k exposes endpoints map[pod1:[80] pod2:[80]] (7.838379429s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-wnx2k
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wnx2k to expose endpoints map[pod2:[80]]
Jan 26 12:00:29.905: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wnx2k exposes endpoints map[pod2:[80]] (1.162917485s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-wnx2k
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wnx2k to expose endpoints map[]
Jan 26 12:00:31.374: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wnx2k exposes endpoints map[] (1.455102999s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:00:31.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-wnx2k" for this suite.
Jan 26 12:00:55.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:00:56.200: INFO: namespace: e2e-tests-services-wnx2k, resource: bindings, ignored listing per whitelist
Jan 26 12:00:56.205: INFO: namespace e2e-tests-services-wnx2k deletion completed in 24.273498577s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:47.761 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
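The endpoints test creates a Service with a selector but no backing pods, confirms the Endpoints object is empty, then adds and removes labelled pods and watches the endpoint set track them (empty, pod1:[80], pod1 plus pod2, pod2, empty again, as logged above). By hand this is roughly the following; names, labels and image are illustrative.

$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector: { app: endpoint-demo }
  ports:
  - port: 80
    targetPort: 80
EOF
$ kubectl run pod1 --image=nginx:1.14-alpine --restart=Never --labels=app=endpoint-demo
$ kubectl get endpoints endpoint-test2    # one address once pod1 is ready
$ kubectl delete pod pod1
$ kubectl get endpoints endpoint-test2    # back to no addresses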
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:00:56.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan 26 12:00:56.331: INFO: Waiting up to 5m0s for pod "var-expansion-826274f5-4033-11ea-b664-0242ac110005" in namespace "e2e-tests-var-expansion-75xvx" to be "success or failure"
Jan 26 12:00:56.341: INFO: Pod "var-expansion-826274f5-4033-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.712947ms
Jan 26 12:00:58.369: INFO: Pod "var-expansion-826274f5-4033-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037651989s
Jan 26 12:01:00.381: INFO: Pod "var-expansion-826274f5-4033-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049564977s
Jan 26 12:01:02.396: INFO: Pod "var-expansion-826274f5-4033-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064829265s
Jan 26 12:01:04.406: INFO: Pod "var-expansion-826274f5-4033-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074102977s
Jan 26 12:01:06.416: INFO: Pod "var-expansion-826274f5-4033-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084972278s
STEP: Saw pod success
Jan 26 12:01:06.417: INFO: Pod "var-expansion-826274f5-4033-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:01:06.422: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-826274f5-4033-11ea-b664-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 26 12:01:06.542: INFO: Waiting for pod var-expansion-826274f5-4033-11ea-b664-0242ac110005 to disappear
Jan 26 12:01:06.626: INFO: Pod var-expansion-826274f5-4033-11ea-b664-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:01:06.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-75xvx" for this suite.
Jan 26 12:01:12.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:01:12.754: INFO: namespace: e2e-tests-var-expansion-75xvx, resource: bindings, ignored listing per whitelist
Jan 26 12:01:12.791: INFO: namespace e2e-tests-var-expansion-75xvx deletion completed in 6.154792454s

• [SLOW TEST:16.585 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
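Variable expansion means $(VAR) references in a container's command or args are substituted from its env before the process starts (the substitution is done by Kubernetes, not by the shell). Sketch with an illustrative variable name and value:

$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from env"
    command: ["sh", "-c", "echo test-value=$(MESSAGE)"]
EOF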
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:01:12.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 12:01:13.236: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c7454b0-4033-11ea-b664-0242ac110005" in namespace "e2e-tests-downward-api-gtm4r" to be "success or failure"
Jan 26 12:01:13.302: INFO: Pod "downwardapi-volume-8c7454b0-4033-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 66.568631ms
Jan 26 12:01:15.318: INFO: Pod "downwardapi-volume-8c7454b0-4033-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081997469s
Jan 26 12:01:17.410: INFO: Pod "downwardapi-volume-8c7454b0-4033-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173926418s
Jan 26 12:01:19.599: INFO: Pod "downwardapi-volume-8c7454b0-4033-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.363136639s
Jan 26 12:01:21.639: INFO: Pod "downwardapi-volume-8c7454b0-4033-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.403236939s
Jan 26 12:01:23.662: INFO: Pod "downwardapi-volume-8c7454b0-4033-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.426537793s
STEP: Saw pod success
Jan 26 12:01:23.662: INFO: Pod "downwardapi-volume-8c7454b0-4033-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:01:23.672: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8c7454b0-4033-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 12:01:23.912: INFO: Waiting for pod downwardapi-volume-8c7454b0-4033-11ea-b664-0242ac110005 to disappear
Jan 26 12:01:23.920: INFO: Pod downwardapi-volume-8c7454b0-4033-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:01:23.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gtm4r" for this suite.
Jan 26 12:01:30.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:01:30.242: INFO: namespace: e2e-tests-downward-api-gtm4r, resource: bindings, ignored listing per whitelist
Jan 26 12:01:30.351: INFO: namespace e2e-tests-downward-api-gtm4r deletion completed in 6.423792171s

• [SLOW TEST:17.559 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
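The downward API volume here exposes the container's own CPU request as a file through resourceFieldRef; divisor sets the unit, so with divisor 1m a 250m request reads back as 250. Sketch with illustrative request, divisor and paths:

$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests: { cpu: 250m }
    volumeMounts:
    - { name: podinfo, mountPath: /etc/podinfo }
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF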
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:01:30.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan 26 12:01:38.893: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
Jan 26 12:03:09.035: INFO: Unexpected error occurred: timed out waiting for the condition
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-namespaces-5jzsw".
STEP: Found 0 events.
Jan 26 12:03:09.072: INFO: POD                                                 NODE                        PHASE    GRACE  CONDITIONS
Jan 26 12:03:09.072: INFO: test-pod-uninitialized                              hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:01:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:01:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:01:49 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:01:38 +0000 UTC  }]
Jan 26 12:03:09.072: INFO: coredns-54ff9cd656-79kxx                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan 26 12:03:09.072: INFO: coredns-54ff9cd656-bmkk4                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan 26 12:03:09.072: INFO: etcd-hunter-server-hu5at5svl7ps                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 26 12:03:09.072: INFO: kube-apiserver-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 26 12:03:09.072: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 26 12:03:09.072: INFO: kube-proxy-bqnnz                                    hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC  }]
Jan 26 12:03:09.072: INFO: kube-scheduler-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 26 12:03:09.072: INFO: weave-net-tqwf2                                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:11:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:11:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  }]
Jan 26 12:03:09.072: INFO: 
Jan 26 12:03:09.078: INFO: 
Logging node info for node hunter-server-hu5at5svl7ps
Jan 26 12:03:09.084: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:19519544,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-26 12:03:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-26 12:03:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-26 12:03:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-26 12:03:00 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:70821e443be75ea38bdf52a974fd2271babd5875b2b1964f05025981c75a6717 nginx:latest] 126698067} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:8aa7f6a9585d908a63e5e418dc5d14ae7467d2e36e1ab4f0d8f9d059a3d071ce] 126324348} {[nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2] 126323778} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70] 126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 
k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jan 26 12:03:09.084: INFO: 
Logging kubelet events for node hunter-server-hu5at5svl7ps
Jan 26 12:03:09.089: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps
Jan 26 12:03:09.108: INFO: etcd-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 26 12:03:09.108: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded)
Jan 26 12:03:09.108: INFO: 	Container weave ready: true, restart count 0
Jan 26 12:03:09.108: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 12:03:09.108: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan 26 12:03:09.108: INFO: 	Container coredns ready: true, restart count 0
Jan 26 12:03:09.108: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 26 12:03:09.108: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 26 12:03:09.108: INFO: test-pod-uninitialized started at 2020-01-26 12:01:39 +0000 UTC (0+1 container statuses recorded)
Jan 26 12:03:09.108: INFO: 	Container nginx ready: true, restart count 0
Jan 26 12:03:09.108: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 26 12:03:09.108: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan 26 12:03:09.108: INFO: 	Container coredns ready: true, restart count 0
Jan 26 12:03:09.108: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded)
Jan 26 12:03:09.108: INFO: 	Container kube-proxy ready: true, restart count 0
W0126 12:03:09.114492       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 12:03:09.189: INFO: 
Latency metrics for node hunter-server-hu5at5svl7ps
Jan 26 12:03:09.189: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:12.031594s}
Jan 26 12:03:09.189: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.019893s}
Jan 26 12:03:09.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-5jzsw" for this suite.
Jan 26 12:03:15.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:03:15.368: INFO: namespace: e2e-tests-namespaces-5jzsw, resource: bindings, ignored listing per whitelist
Jan 26 12:03:15.476: INFO: namespace e2e-tests-namespaces-5jzsw deletion completed in 6.273948669s
STEP: Destroying namespace "e2e-tests-nsdeletetest-v8nzn" for this suite.
Jan 26 12:03:15.481: INFO: Couldn't delete ns: "e2e-tests-nsdeletetest-v8nzn": Operation cannot be fulfilled on namespaces "e2e-tests-nsdeletetest-v8nzn": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system. (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"Operation cannot be fulfilled on namespaces \"e2e-tests-nsdeletetest-v8nzn\": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.", Reason:"Conflict", Details:(*v1.StatusDetails)(0xc001d0ede0), Code:409}})

• Failure [105.131 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Expected error:
      <*errors.errorString | 0xc0000a18a0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161
------------------------------
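The failure above is the namespace test timing out while waiting for its content to be removed, and the trailing 409 on "e2e-tests-nsdeletetest-v8nzn" is the apiserver refusing to act on a namespace that is still terminating. For context only, a minimal sketch of the kind of polling loop such a cleanup performs, assuming pre-0.18 client-go signatures (no context argument) and a hypothetical waitForNamespaceGone helper:

    import (
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNamespaceGone polls until the namespace returns NotFound or the timeout expires.
    func waitForNamespaceGone(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.Poll(2*time.Second, timeout, func() (bool, error) {
            _, err := cs.CoreV1().Namespaces().Get(name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return true, nil // namespace fully purged
            }
            // nil err means the namespace still exists (likely Terminating); keep polling.
            return false, err
        })
    }

In the run above the condition never became true within the allowed window, which is exactly the "timed out waiting for the condition" error reported.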
SSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:03:15.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 26 12:03:25.986: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-d585ae77-4033-11ea-b664-0242ac110005,GenerateName:,Namespace:e2e-tests-events-ppgkk,SelfLink:/api/v1/namespaces/e2e-tests-events-ppgkk/pods/send-events-d585ae77-4033-11ea-b664-0242ac110005,UID:d5969cb4-4033-11ea-a994-fa163e34d433,ResourceVersion:19519588,Generation:0,CreationTimestamp:2020-01-26 12:03:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 803927222,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vdzl9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vdzl9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-vdzl9 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020410c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020410f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:03:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:03:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:03:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:03:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-26 12:03:16 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-26 12:03:24 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://5dc8c83371e27c6f8b12db80b5246881ee81255387f32279884e64cdfda39ed1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 26 12:03:28.003: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 26 12:03:30.021: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:03:30.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-ppgkk" for this suite.
Jan 26 12:04:16.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:04:16.214: INFO: namespace: e2e-tests-events-ppgkk, resource: bindings, ignored listing per whitelist
Jan 26 12:04:16.250: INFO: namespace e2e-tests-events-ppgkk deletion completed in 46.178781757s

• [SLOW TEST:60.767 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
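The two "Saw ... event for our pod" lines above come from querying the Events API filtered down to the test pod and the reporting component. A rough sketch of that query, again assuming pre-0.18 client-go signatures; the namespace, pod name, and component values are placeholders:

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
    )

    // listPodEvents prints events whose involvedObject matches the given pod,
    // restricted to one reporting component such as "kubelet" or "default-scheduler".
    func listPodEvents(cs kubernetes.Interface, ns, podName, component string) error {
        sel := fields.Set{
            "involvedObject.kind":      "Pod",
            "involvedObject.name":      podName,
            "involvedObject.namespace": ns,
            "source":                   component,
        }.AsSelector().String()
        evts, err := cs.CoreV1().Events(ns).List(metav1.ListOptions{FieldSelector: sel})
        if err != nil {
            return err
        }
        for _, e := range evts.Items {
            fmt.Printf("%s %s: %s\n", e.Source.Component, e.Reason, e.Message)
        }
        return nil
    }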
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:04:16.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 26 12:04:16.621: INFO: Number of nodes with available pods: 0
Jan 26 12:04:16.621: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:04:17.656: INFO: Number of nodes with available pods: 0
Jan 26 12:04:17.656: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:04:18.716: INFO: Number of nodes with available pods: 0
Jan 26 12:04:18.716: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:04:19.662: INFO: Number of nodes with available pods: 0
Jan 26 12:04:19.662: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:04:20.663: INFO: Number of nodes with available pods: 0
Jan 26 12:04:20.663: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:04:21.683: INFO: Number of nodes with available pods: 0
Jan 26 12:04:21.683: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:04:23.652: INFO: Number of nodes with available pods: 0
Jan 26 12:04:23.652: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:04:24.640: INFO: Number of nodes with available pods: 0
Jan 26 12:04:24.640: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:04:25.645: INFO: Number of nodes with available pods: 0
Jan 26 12:04:25.645: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:04:26.637: INFO: Number of nodes with available pods: 1
Jan 26 12:04:26.637: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 26 12:04:26.726: INFO: Number of nodes with available pods: 1
Jan 26 12:04:26.726: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-9jf5t, will wait for the garbage collector to delete the pods
Jan 26 12:04:28.194: INFO: Deleting DaemonSet.extensions daemon-set took: 20.124046ms
Jan 26 12:04:28.895: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.634548ms
Jan 26 12:04:35.703: INFO: Number of nodes with available pods: 0
Jan 26 12:04:35.703: INFO: Number of running nodes: 0, number of available pods: 0
Jan 26 12:04:35.756: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9jf5t/daemonsets","resourceVersion":"19519724"},"items":null}

Jan 26 12:04:35.763: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9jf5t/pods","resourceVersion":"19519724"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:04:35.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-9jf5t" for this suite.
Jan 26 12:04:41.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:04:42.019: INFO: namespace: e2e-tests-daemonsets-9jf5t, resource: bindings, ignored listing per whitelist
Jan 26 12:04:42.062: INFO: namespace e2e-tests-daemonsets-9jf5t deletion completed in 6.271057587s

• [SLOW TEST:25.812 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
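For reference, a DaemonSet equivalent in shape to the simple "daemon-set" created above can be built with the typed Go API. This is an illustrative sketch, not the test's actual code; the image, port, and labels are placeholders:

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func simpleDaemonSet(name, image string, labels map[string]string) *appsv1.DaemonSet {
        return &appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: appsv1.DaemonSetSpec{
                // The selector must match the pod template labels, or the apiserver rejects the object.
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  name,
                            Image: image, // e.g. a small webserver image
                            Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
                        }},
                    },
                },
            },
        }
    }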
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:04:42.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0126 12:05:22.877662       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 12:05:22.877: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:05:22.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-dx8z6" for this suite.
Jan 26 12:05:38.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:05:39.157: INFO: namespace: e2e-tests-gc-dx8z6, resource: bindings, ignored listing per whitelist
Jan 26 12:05:39.183: INFO: namespace e2e-tests-gc-dx8z6 deletion completed in 16.299407916s

• [SLOW TEST:57.121 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
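The "delete the rc" step above deletes the ReplicationController with orphaning semantics, which is why the test then waits 30 seconds to confirm the garbage collector does not remove the pods. A sketch of issuing such a delete, assuming pre-0.18 client-go signatures and a hypothetical RC name:

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteRCOrphaningPods deletes a ReplicationController but leaves its pods behind,
    // so the GC must clear their ownerReferences instead of deleting them.
    func deleteRCOrphaningPods(cs kubernetes.Interface, ns, name string) error {
        orphan := metav1.DeletePropagationOrphan
        return cs.CoreV1().ReplicationControllers(ns).Delete(name, &metav1.DeleteOptions{
            PropagationPolicy: &orphan,
        })
    }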
SSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:05:39.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Jan 26 12:05:40.856: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 26 12:05:46.009: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:05:47.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-mp7jj" for this suite.
Jan 26 12:06:00.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:06:00.251: INFO: namespace: e2e-tests-replication-controller-mp7jj, resource: bindings, ignored listing per whitelist
Jan 26 12:06:00.352: INFO: namespace e2e-tests-replication-controller-mp7jj deletion completed in 12.867060773s

• [SLOW TEST:21.169 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:06:00.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 26 12:06:18.937: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 12:06:18.962: INFO: Pod pod-with-poststart-http-hook still exists
Jan 26 12:06:20.962: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 12:06:20.987: INFO: Pod pod-with-poststart-http-hook still exists
Jan 26 12:06:22.962: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 12:06:22.998: INFO: Pod pod-with-poststart-http-hook still exists
Jan 26 12:06:24.962: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 12:06:24.971: INFO: Pod pod-with-poststart-http-hook still exists
Jan 26 12:06:26.962: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 12:06:26.994: INFO: Pod pod-with-poststart-http-hook still exists
Jan 26 12:06:28.962: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 12:06:29.062: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:06:29.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-cdj7j" for this suite.
Jan 26 12:06:53.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:06:53.267: INFO: namespace: e2e-tests-container-lifecycle-hook-cdj7j, resource: bindings, ignored listing per whitelist
Jan 26 12:06:53.298: INFO: namespace e2e-tests-container-lifecycle-hook-cdj7j deletion completed in 24.225505096s

• [SLOW TEST:52.945 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
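The pod created in the "create the pod with lifecycle hook" step above carries a postStart httpGet handler aimed at the helper pod started beforehand. A minimal sketch of that spec with the v1.13-era types (where the handler type is corev1.Handler; later API versions rename it LifecycleHandler); the image, path, port, and target IP are placeholders:

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func podWithPostStartHTTPHook(targetIP string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pod-with-poststart-http-hook",
                    Image: "k8s.gcr.io/pause:3.1", // placeholder image
                    Lifecycle: &corev1.Lifecycle{
                        PostStart: &corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/echo?msg=poststart", // hypothetical handler path
                                Host: targetIP,
                                Port: intstr.FromInt(8080),
                            },
                        },
                    },
                }},
            },
        }
    }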
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:06:53.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-5745870e-4034-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 12:06:53.512: INFO: Waiting up to 5m0s for pod "pod-secrets-5746ca17-4034-11ea-b664-0242ac110005" in namespace "e2e-tests-secrets-5pm6q" to be "success or failure"
Jan 26 12:06:53.536: INFO: Pod "pod-secrets-5746ca17-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.802129ms
Jan 26 12:06:55.556: INFO: Pod "pod-secrets-5746ca17-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044105859s
Jan 26 12:06:57.602: INFO: Pod "pod-secrets-5746ca17-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090194331s
Jan 26 12:06:59.930: INFO: Pod "pod-secrets-5746ca17-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417360784s
Jan 26 12:07:01.941: INFO: Pod "pod-secrets-5746ca17-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.429124791s
Jan 26 12:07:03.965: INFO: Pod "pod-secrets-5746ca17-4034-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.453025716s
STEP: Saw pod success
Jan 26 12:07:03.965: INFO: Pod "pod-secrets-5746ca17-4034-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:07:03.973: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5746ca17-4034-11ea-b664-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 26 12:07:04.635: INFO: Waiting for pod pod-secrets-5746ca17-4034-11ea-b664-0242ac110005 to disappear
Jan 26 12:07:04.868: INFO: Pod pod-secrets-5746ca17-4034-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:07:04.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5pm6q" for this suite.
Jan 26 12:07:10.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:07:11.042: INFO: namespace: e2e-tests-secrets-5pm6q, resource: bindings, ignored listing per whitelist
Jan 26 12:07:11.051: INFO: namespace e2e-tests-secrets-5pm6q deletion completed in 6.173128896s

• [SLOW TEST:17.753 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
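The "Creating a pod to test consume secrets" step above mounts the secret into a short-lived container that prints a file from the volume and exits, which is what the "success or failure" wait checks. A sketch of that shape using the typed API; the secret name, mount path, and mounttest-style args are illustrative, not the test's literal values:

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func podConsumingSecret(secretName string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever, // run once, then report Succeeded or Failed
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: secretName},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "secret-volume-test",
                    Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
                    Args:  []string{"--file_content=/etc/secret-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                        ReadOnly:  true,
                    }},
                }},
            },
        }
    }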
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:07:11.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-61dd803d-4034-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 12:07:11.285: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-61de7db3-4034-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-zvj7b" to be "success or failure"
Jan 26 12:07:11.328: INFO: Pod "pod-projected-configmaps-61de7db3-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 43.414816ms
Jan 26 12:07:13.340: INFO: Pod "pod-projected-configmaps-61de7db3-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055469563s
Jan 26 12:07:15.350: INFO: Pod "pod-projected-configmaps-61de7db3-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06513801s
Jan 26 12:07:17.376: INFO: Pod "pod-projected-configmaps-61de7db3-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091372994s
Jan 26 12:07:19.405: INFO: Pod "pod-projected-configmaps-61de7db3-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119955081s
Jan 26 12:07:21.475: INFO: Pod "pod-projected-configmaps-61de7db3-4034-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.190453632s
STEP: Saw pod success
Jan 26 12:07:21.475: INFO: Pod "pod-projected-configmaps-61de7db3-4034-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:07:21.572: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-61de7db3-4034-11ea-b664-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 26 12:07:21.763: INFO: Waiting for pod pod-projected-configmaps-61de7db3-4034-11ea-b664-0242ac110005 to disappear
Jan 26 12:07:21.808: INFO: Pod pod-projected-configmaps-61de7db3-4034-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:07:21.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zvj7b" for this suite.
Jan 26 12:07:30.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:07:30.196: INFO: namespace: e2e-tests-projected-zvj7b, resource: bindings, ignored listing per whitelist
Jan 26 12:07:30.232: INFO: namespace e2e-tests-projected-zvj7b deletion completed in 8.397749387s

• [SLOW TEST:19.181 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:07:30.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 26 12:07:40.672: INFO: Pod pod-hostip-6d584ec0-4034-11ea-b664-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:07:40.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-4ptqf" for this suite.
Jan 26 12:08:04.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:08:04.982: INFO: namespace: e2e-tests-pods-4ptqf, resource: bindings, ignored listing per whitelist
Jan 26 12:08:05.005: INFO: namespace e2e-tests-pods-4ptqf deletion completed in 24.314634639s

• [SLOW TEST:34.773 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
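The single INFO line above is effectively the whole assertion of this test: once the pod is running, its status must carry the node's address. A trivial sketch of that check, assuming pre-0.18 client-go signatures and placeholder names:

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func checkHostIP(cs kubernetes.Interface, ns, name string) error {
        p, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if p.Status.HostIP == "" {
            return fmt.Errorf("pod %s has no hostIP yet", name)
        }
        fmt.Printf("Pod %s has hostIP: %s\n", name, p.Status.HostIP)
        return nil
    }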
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:08:05.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 26 12:08:05.242: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-c4nn9,SelfLink:/api/v1/namespaces/e2e-tests-watch-c4nn9/configmaps/e2e-watch-test-resource-version,UID:81fa6557-4034-11ea-a994-fa163e34d433,ResourceVersion:19520324,Generation:0,CreationTimestamp:2020-01-26 12:08:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 26 12:08:05.242: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-c4nn9,SelfLink:/api/v1/namespaces/e2e-tests-watch-c4nn9/configmaps/e2e-watch-test-resource-version,UID:81fa6557-4034-11ea-a994-fa163e34d433,ResourceVersion:19520325,Generation:0,CreationTimestamp:2020-01-26 12:08:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:08:05.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-c4nn9" for this suite.
Jan 26 12:08:11.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:08:11.497: INFO: namespace: e2e-tests-watch-c4nn9, resource: bindings, ignored listing per whitelist
Jan 26 12:08:11.497: INFO: namespace e2e-tests-watch-c4nn9 deletion completed in 6.248556688s

• [SLOW TEST:6.491 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
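The watch above is opened at the resourceVersion returned by the first update, so only the second modification and the deletion are delivered, matching the two "Got :" lines. A compact sketch of opening such a watch, again with pre-0.18 client-go signatures and placeholder names:

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchConfigMapFrom replays configmap changes starting at resourceVersion rv.
    func watchConfigMapFrom(cs kubernetes.Interface, ns, name, rv string) error {
        w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
            FieldSelector:   "metadata.name=" + name,
            ResourceVersion: rv,
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        // A real test would exit this loop once the expected events have been observed.
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
        }
        return nil
    }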
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:08:11.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-85e566ca-4034-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 12:08:11.768: INFO: Waiting up to 5m0s for pod "pod-configmaps-85e6925f-4034-11ea-b664-0242ac110005" in namespace "e2e-tests-configmap-fqs4b" to be "success or failure"
Jan 26 12:08:11.809: INFO: Pod "pod-configmaps-85e6925f-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.75916ms
Jan 26 12:08:13.843: INFO: Pod "pod-configmaps-85e6925f-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074829827s
Jan 26 12:08:15.864: INFO: Pod "pod-configmaps-85e6925f-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095860371s
Jan 26 12:08:18.205: INFO: Pod "pod-configmaps-85e6925f-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437303909s
Jan 26 12:08:20.220: INFO: Pod "pod-configmaps-85e6925f-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.45197151s
Jan 26 12:08:22.232: INFO: Pod "pod-configmaps-85e6925f-4034-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.464441947s
STEP: Saw pod success
Jan 26 12:08:22.232: INFO: Pod "pod-configmaps-85e6925f-4034-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:08:22.236: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-85e6925f-4034-11ea-b664-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 26 12:08:23.059: INFO: Waiting for pod pod-configmaps-85e6925f-4034-11ea-b664-0242ac110005 to disappear
Jan 26 12:08:23.079: INFO: Pod pod-configmaps-85e6925f-4034-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:08:23.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fqs4b" for this suite.
Jan 26 12:08:29.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:08:29.419: INFO: namespace: e2e-tests-configmap-fqs4b, resource: bindings, ignored listing per whitelist
Jan 26 12:08:29.457: INFO: namespace e2e-tests-configmap-fqs4b deletion completed in 6.36017288s

• [SLOW TEST:17.960 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:08:29.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-909ee0b2-4034-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 12:08:29.724: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-90a0520c-4034-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-xdmwf" to be "success or failure"
Jan 26 12:08:29.737: INFO: Pod "pod-projected-configmaps-90a0520c-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.184165ms
Jan 26 12:08:31.833: INFO: Pod "pod-projected-configmaps-90a0520c-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108455907s
Jan 26 12:08:33.860: INFO: Pod "pod-projected-configmaps-90a0520c-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135765267s
Jan 26 12:08:35.876: INFO: Pod "pod-projected-configmaps-90a0520c-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151997094s
Jan 26 12:08:37.892: INFO: Pod "pod-projected-configmaps-90a0520c-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167749774s
Jan 26 12:08:39.909: INFO: Pod "pod-projected-configmaps-90a0520c-4034-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.1843834s
STEP: Saw pod success
Jan 26 12:08:39.909: INFO: Pod "pod-projected-configmaps-90a0520c-4034-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:08:39.917: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-90a0520c-4034-11ea-b664-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 26 12:08:40.077: INFO: Waiting for pod pod-projected-configmaps-90a0520c-4034-11ea-b664-0242ac110005 to disappear
Jan 26 12:08:40.086: INFO: Pod pod-projected-configmaps-90a0520c-4034-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:08:40.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xdmwf" for this suite.
Jan 26 12:08:46.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:08:46.328: INFO: namespace: e2e-tests-projected-xdmwf, resource: bindings, ignored listing per whitelist
Jan 26 12:08:46.356: INFO: namespace e2e-tests-projected-xdmwf deletion completed in 6.258322809s

• [SLOW TEST:16.899 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
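The "mappings" variant above remaps a configMap key to a custom path inside the projected volume rather than using the key's default file name. A sketch of the relevant volume source; the volume name, key, and path are placeholders:

    import (
        corev1 "k8s.io/api/core/v1"
    )

    func projectedConfigMapVolume(cmName string) corev1.Volume {
        return corev1.Volume{
            Name: "projected-configmap-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                            // Map the key "data-2" to a nested path instead of its default file name.
                            Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
                        },
                    }},
                },
            },
        }
    }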
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:08:46.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-cdpq7/configmap-test-9aa8e4aa-4034-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 12:08:46.625: INFO: Waiting up to 5m0s for pod "pod-configmaps-9ab166b0-4034-11ea-b664-0242ac110005" in namespace "e2e-tests-configmap-cdpq7" to be "success or failure"
Jan 26 12:08:46.637: INFO: Pod "pod-configmaps-9ab166b0-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.532354ms
Jan 26 12:08:48.649: INFO: Pod "pod-configmaps-9ab166b0-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023735261s
Jan 26 12:08:50.684: INFO: Pod "pod-configmaps-9ab166b0-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059045301s
Jan 26 12:08:52.859: INFO: Pod "pod-configmaps-9ab166b0-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.233224445s
Jan 26 12:08:54.876: INFO: Pod "pod-configmaps-9ab166b0-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.250262537s
Jan 26 12:08:56.913: INFO: Pod "pod-configmaps-9ab166b0-4034-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.287717389s
STEP: Saw pod success
Jan 26 12:08:56.913: INFO: Pod "pod-configmaps-9ab166b0-4034-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:08:56.922: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-9ab166b0-4034-11ea-b664-0242ac110005 container env-test: 
STEP: delete the pod
Jan 26 12:08:57.005: INFO: Waiting for pod pod-configmaps-9ab166b0-4034-11ea-b664-0242ac110005 to disappear
Jan 26 12:08:57.074: INFO: Pod pod-configmaps-9ab166b0-4034-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:08:57.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cdpq7" for this suite.
Jan 26 12:09:03.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:09:03.244: INFO: namespace: e2e-tests-configmap-cdpq7, resource: bindings, ignored listing per whitelist
Jan 26 12:09:03.431: INFO: namespace e2e-tests-configmap-cdpq7 deletion completed in 6.345197237s

• [SLOW TEST:17.074 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
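The env-test container above receives a configMap key as an environment variable and simply prints its environment. A sketch of the container-level wiring; the configMap name, key, and variable name are placeholders:

    import (
        corev1 "k8s.io/api/core/v1"
    )

    func containerWithConfigMapEnv(cmName string) corev1.Container {
        return corev1.Container{
            Name:    "env-test",
            Image:   "busybox:1.29",
            Command: []string{"sh", "-c", "env"}, // print the environment and exit
            Env: []corev1.EnvVar{{
                Name: "CONFIG_DATA_1",
                ValueFrom: &corev1.EnvVarSource{
                    ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                        LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                        Key:                  "data-1",
                    },
                },
            }},
        }
    }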
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:09:03.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-k4nw
STEP: Creating a pod to test atomic-volume-subpath
Jan 26 12:09:03.761: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-k4nw" in namespace "e2e-tests-subpath-z7sbg" to be "success or failure"
Jan 26 12:09:03.878: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Pending", Reason="", readiness=false. Elapsed: 116.106108ms
Jan 26 12:09:05.917: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15558308s
Jan 26 12:09:07.950: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188819206s
Jan 26 12:09:10.458: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.696449575s
Jan 26 12:09:12.510: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.748530066s
Jan 26 12:09:14.533: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.77122522s
Jan 26 12:09:16.662: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.900980986s
Jan 26 12:09:18.678: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.91611259s
Jan 26 12:09:20.705: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Running", Reason="", readiness=false. Elapsed: 16.943269934s
Jan 26 12:09:22.767: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Running", Reason="", readiness=false. Elapsed: 19.005941979s
Jan 26 12:09:24.785: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Running", Reason="", readiness=false. Elapsed: 21.023908205s
Jan 26 12:09:26.808: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Running", Reason="", readiness=false. Elapsed: 23.046497892s
Jan 26 12:09:28.828: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Running", Reason="", readiness=false. Elapsed: 25.066596777s
Jan 26 12:09:30.852: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Running", Reason="", readiness=false. Elapsed: 27.090302866s
Jan 26 12:09:32.873: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Running", Reason="", readiness=false. Elapsed: 29.111608425s
Jan 26 12:09:34.888: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Running", Reason="", readiness=false. Elapsed: 31.126981384s
Jan 26 12:09:36.903: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Running", Reason="", readiness=false. Elapsed: 33.141159718s
Jan 26 12:09:38.931: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Running", Reason="", readiness=false. Elapsed: 35.16978105s
Jan 26 12:09:41.646: INFO: Pod "pod-subpath-test-secret-k4nw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.884488026s
STEP: Saw pod success
Jan 26 12:09:41.646: INFO: Pod "pod-subpath-test-secret-k4nw" satisfied condition "success or failure"
Jan 26 12:09:41.690: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-k4nw container test-container-subpath-secret-k4nw: 
STEP: delete the pod
Jan 26 12:09:41.861: INFO: Waiting for pod pod-subpath-test-secret-k4nw to disappear
Jan 26 12:09:41.880: INFO: Pod pod-subpath-test-secret-k4nw no longer exists
STEP: Deleting pod pod-subpath-test-secret-k4nw
Jan 26 12:09:41.880: INFO: Deleting pod "pod-subpath-test-secret-k4nw" in namespace "e2e-tests-subpath-z7sbg"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:09:41.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-z7sbg" for this suite.
Jan 26 12:09:49.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:09:50.018: INFO: namespace: e2e-tests-subpath-z7sbg, resource: bindings, ignored listing per whitelist
Jan 26 12:09:50.174: INFO: namespace e2e-tests-subpath-z7sbg deletion completed in 8.219681132s

• [SLOW TEST:46.743 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
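The atomic-writer subpath test above mounts a single file out of the secret volume via subPath rather than mounting the whole volume directory. The essential piece is the volume plus a subPath mount; the names and paths below are placeholders:

    import (
        corev1 "k8s.io/api/core/v1"
    )

    func secretSubPathMount(secretName string) (corev1.Volume, corev1.VolumeMount) {
        vol := corev1.Volume{
            Name: "secret-volume",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: secretName},
            },
        }
        mount := corev1.VolumeMount{
            Name:      "secret-volume",
            MountPath: "/test-volume/file.txt",
            SubPath:   "file.txt", // expose only this path within the volume, not the whole volume
        }
        return vol, mount
    }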
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:09:50.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 26 12:09:50.481: INFO: Waiting up to 5m0s for pod "pod-c0c00fe0-4034-11ea-b664-0242ac110005" in namespace "e2e-tests-emptydir-rtmk6" to be "success or failure"
Jan 26 12:09:50.491: INFO: Pod "pod-c0c00fe0-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.747916ms
Jan 26 12:09:52.510: INFO: Pod "pod-c0c00fe0-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028466575s
Jan 26 12:09:54.530: INFO: Pod "pod-c0c00fe0-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049073234s
Jan 26 12:09:56.657: INFO: Pod "pod-c0c00fe0-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.175679707s
Jan 26 12:09:58.669: INFO: Pod "pod-c0c00fe0-4034-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18819625s
Jan 26 12:10:00.688: INFO: Pod "pod-c0c00fe0-4034-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.206579147s
STEP: Saw pod success
Jan 26 12:10:00.688: INFO: Pod "pod-c0c00fe0-4034-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:10:00.693: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c0c00fe0-4034-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 12:10:00.790: INFO: Waiting for pod pod-c0c00fe0-4034-11ea-b664-0242ac110005 to disappear
Jan 26 12:10:00.805: INFO: Pod pod-c0c00fe0-4034-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:10:00.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rtmk6" for this suite.
Jan 26 12:10:06.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:10:07.109: INFO: namespace: e2e-tests-emptydir-rtmk6, resource: bindings, ignored listing per whitelist
Jan 26 12:10:07.182: INFO: namespace e2e-tests-emptydir-rtmk6 deletion completed in 6.368693755s

• [SLOW TEST:17.008 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
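The (root,0644,default) case above writes a file with mode 0644 into an emptyDir backed by the node's default storage medium and verifies the resulting mode and contents. A sketch of the pod shape with mounttest-style args; treat the image arguments as illustrative rather than the test's exact flags:

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func emptyDirPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // An empty Medium selects the node's default storage medium (as opposed to Memory).
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
                    Args: []string{
                        "--new_file_0644=/test-volume/test-file",
                        "--file_perm=/test-volume/test-file",
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
            },
        }
    }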
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:10:07.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:10:07.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-ts7xx" for this suite.
Jan 26 12:10:13.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:10:14.018: INFO: namespace: e2e-tests-kubelet-test-ts7xx, resource: bindings, ignored listing per whitelist
Jan 26 12:10:14.145: INFO: namespace e2e-tests-kubelet-test-ts7xx deletion completed in 6.352765938s

• [SLOW TEST:6.962 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
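A minimal sketch of what this Kubelet spec asserts, using an illustrative pod name: a pod whose command always fails should still be deletable while it sits in a failed state.

kubectl run always-fails --image=busybox --restart=Never -- /bin/false
kubectl get pod always-fails      # ends up in Error / Failed
kubectl delete pod always-fails   # deletion must still succeed
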
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:10:14.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-wwg8l
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 26 12:10:14.352: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 26 12:10:50.800: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-wwg8l PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 12:10:50.800: INFO: >>> kubeConfig: /root/.kube/config
I0126 12:10:50.950540       8 log.go:172] (0xc000b0b600) (0xc0027fc780) Create stream
I0126 12:10:50.950688       8 log.go:172] (0xc000b0b600) (0xc0027fc780) Stream added, broadcasting: 1
I0126 12:10:50.959407       8 log.go:172] (0xc000b0b600) Reply frame received for 1
I0126 12:10:50.959478       8 log.go:172] (0xc000b0b600) (0xc0022dc000) Create stream
I0126 12:10:50.959513       8 log.go:172] (0xc000b0b600) (0xc0022dc000) Stream added, broadcasting: 3
I0126 12:10:50.961521       8 log.go:172] (0xc000b0b600) Reply frame received for 3
I0126 12:10:50.961546       8 log.go:172] (0xc000b0b600) (0xc0022d0000) Create stream
I0126 12:10:50.961562       8 log.go:172] (0xc000b0b600) (0xc0022d0000) Stream added, broadcasting: 5
I0126 12:10:50.962947       8 log.go:172] (0xc000b0b600) Reply frame received for 5
I0126 12:10:51.135897       8 log.go:172] (0xc000b0b600) Data frame received for 3
I0126 12:10:51.135959       8 log.go:172] (0xc0022dc000) (3) Data frame handling
I0126 12:10:51.135975       8 log.go:172] (0xc0022dc000) (3) Data frame sent
I0126 12:10:51.271789       8 log.go:172] (0xc000b0b600) Data frame received for 1
I0126 12:10:51.271917       8 log.go:172] (0xc000b0b600) (0xc0022dc000) Stream removed, broadcasting: 3
I0126 12:10:51.271954       8 log.go:172] (0xc0027fc780) (1) Data frame handling
I0126 12:10:51.271969       8 log.go:172] (0xc0027fc780) (1) Data frame sent
I0126 12:10:51.272020       8 log.go:172] (0xc000b0b600) (0xc0022d0000) Stream removed, broadcasting: 5
I0126 12:10:51.272056       8 log.go:172] (0xc000b0b600) (0xc0027fc780) Stream removed, broadcasting: 1
I0126 12:10:51.272073       8 log.go:172] (0xc000b0b600) Go away received
I0126 12:10:51.272278       8 log.go:172] (0xc000b0b600) (0xc0027fc780) Stream removed, broadcasting: 1
I0126 12:10:51.272294       8 log.go:172] (0xc000b0b600) (0xc0022dc000) Stream removed, broadcasting: 3
I0126 12:10:51.272306       8 log.go:172] (0xc000b0b600) (0xc0022d0000) Stream removed, broadcasting: 5
Jan 26 12:10:51.272: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:10:51.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-wwg8l" for this suite.
Jan 26 12:11:15.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:11:15.517: INFO: namespace: e2e-tests-pod-network-test-wwg8l, resource: bindings, ignored listing per whitelist
Jan 26 12:11:15.573: INFO: namespace e2e-tests-pod-network-test-wwg8l deletion completed in 24.271891038s

• [SLOW TEST:61.429 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
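The exec above reduces to the single probe below (namespace, pod name, and IPs are the ones from this run). The test webserver container exposes an HTTP /dial endpoint that relays the hostName request to the target pod over UDP and reports which pod answered:

kubectl exec -n e2e-tests-pod-network-test-wwg8l host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'"
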
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:11:15.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 26 12:11:16.340: INFO: Waiting up to 5m0s for pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pm8gg" in namespace "e2e-tests-svcaccounts-n85mv" to be "success or failure"
Jan 26 12:11:16.359: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pm8gg": Phase="Pending", Reason="", readiness=false. Elapsed: 19.192392ms
Jan 26 12:11:18.734: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pm8gg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.394446723s
Jan 26 12:11:20.764: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pm8gg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.424209852s
Jan 26 12:11:23.075: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pm8gg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.735577025s
Jan 26 12:11:25.098: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pm8gg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.758744218s
Jan 26 12:11:27.109: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pm8gg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.769454735s
Jan 26 12:11:29.552: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pm8gg": Phase="Pending", Reason="", readiness=false. Elapsed: 13.211919759s
Jan 26 12:11:31.567: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pm8gg": Phase="Pending", Reason="", readiness=false. Elapsed: 15.226773526s
Jan 26 12:11:33.580: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pm8gg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.240012797s
STEP: Saw pod success
Jan 26 12:11:33.580: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pm8gg" satisfied condition "success or failure"
Jan 26 12:11:33.585: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pm8gg container token-test: 
STEP: delete the pod
Jan 26 12:11:35.395: INFO: Waiting for pod pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pm8gg to disappear
Jan 26 12:11:35.963: INFO: Pod pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pm8gg no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 26 12:11:35.999: INFO: Waiting up to 5m0s for pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-mw9dj" in namespace "e2e-tests-svcaccounts-n85mv" to be "success or failure"
Jan 26 12:11:36.185: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-mw9dj": Phase="Pending", Reason="", readiness=false. Elapsed: 185.607162ms
Jan 26 12:11:38.219: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-mw9dj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219262576s
Jan 26 12:11:40.241: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-mw9dj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.241680213s
Jan 26 12:11:42.368: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-mw9dj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.368208003s
Jan 26 12:11:44.382: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-mw9dj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.382699932s
Jan 26 12:11:46.437: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-mw9dj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.43706984s
Jan 26 12:11:48.493: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-mw9dj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.492831975s
Jan 26 12:11:50.525: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-mw9dj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.525542983s
Jan 26 12:11:52.870: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-mw9dj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.869997474s
STEP: Saw pod success
Jan 26 12:11:52.870: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-mw9dj" satisfied condition "success or failure"
Jan 26 12:11:52.877: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-mw9dj container root-ca-test: 
STEP: delete the pod
Jan 26 12:11:53.168: INFO: Waiting for pod pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-mw9dj to disappear
Jan 26 12:11:53.192: INFO: Pod pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-mw9dj no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 26 12:11:53.215: INFO: Waiting up to 5m0s for pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pz4fk" in namespace "e2e-tests-svcaccounts-n85mv" to be "success or failure"
Jan 26 12:11:53.315: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pz4fk": Phase="Pending", Reason="", readiness=false. Elapsed: 99.595883ms
Jan 26 12:11:55.334: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pz4fk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118478269s
Jan 26 12:11:57.346: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pz4fk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130206829s
Jan 26 12:12:00.263: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pz4fk": Phase="Pending", Reason="", readiness=false. Elapsed: 7.047241902s
Jan 26 12:12:02.276: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pz4fk": Phase="Pending", Reason="", readiness=false. Elapsed: 9.059898318s
Jan 26 12:12:04.503: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pz4fk": Phase="Pending", Reason="", readiness=false. Elapsed: 11.28736108s
Jan 26 12:12:06.524: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pz4fk": Phase="Pending", Reason="", readiness=false. Elapsed: 13.308889324s
Jan 26 12:12:08.548: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pz4fk": Phase="Pending", Reason="", readiness=false. Elapsed: 15.332396847s
Jan 26 12:12:10.611: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pz4fk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.395364279s
STEP: Saw pod success
Jan 26 12:12:10.611: INFO: Pod "pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pz4fk" satisfied condition "success or failure"
Jan 26 12:12:10.631: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pz4fk container namespace-test: 
STEP: delete the pod
Jan 26 12:12:11.025: INFO: Waiting for pod pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pz4fk to disappear
Jan 26 12:12:11.060: INFO: Pod pod-service-account-f3ee9377-4034-11ea-b664-0242ac110005-pz4fk no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:12:11.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-n85mv" for this suite.
Jan 26 12:12:19.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:12:19.408: INFO: namespace: e2e-tests-svcaccounts-n85mv, resource: bindings, ignored listing per whitelist
Jan 26 12:12:19.474: INFO: namespace e2e-tests-svcaccounts-n85mv deletion completed in 8.266122636s

• [SLOW TEST:63.901 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
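The three probe pods above (token-test, root-ca-test, namespace-test) each read one file from the standard auto-mounted service account path. A hand-run equivalent, with an illustrative pod name:

kubectl run sa-check --image=busybox --restart=Never -- \
  sh -c 'ls /var/run/secrets/kubernetes.io/serviceaccount && cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
kubectl logs sa-check    # expect ca.crt, namespace, token listed, then the namespace name
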
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:12:19.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:12:29.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-g5n67" for this suite.
Jan 26 12:13:15.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:13:15.997: INFO: namespace: e2e-tests-kubelet-test-g5n67, resource: bindings, ignored listing per whitelist
Jan 26 12:13:16.025: INFO: namespace e2e-tests-kubelet-test-g5n67 deletion completed in 46.19207155s

• [SLOW TEST:56.550 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
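A minimal sketch of the read-only-rootfs pod this spec runs (pod name and image are illustrative): writes to the root filesystem should fail while the container still starts normally.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /should-fail || echo write blocked as expected"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
kubectl logs readonly-rootfs-demo
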
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:13:16.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 26 12:13:16.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-pmbt9'
Jan 26 12:13:18.029: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 26 12:13:18.029: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan 26 12:13:22.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-pmbt9'
Jan 26 12:13:22.353: INFO: stderr: ""
Jan 26 12:13:22.353: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:13:22.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pmbt9" for this suite.
Jan 26 12:13:46.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:13:46.664: INFO: namespace: e2e-tests-kubectl-pmbt9, resource: bindings, ignored listing per whitelist
Jan 26 12:13:46.703: INFO: namespace e2e-tests-kubectl-pmbt9 deletion completed in 24.338808911s

• [SLOW TEST:30.678 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
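The deprecated command the test ran, plus one current alternative along the lines the warning suggests (same image and namespace as above):

kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine \
  --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-pmbt9     # deprecated form used by the test
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-pmbt9
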
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:13:46.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-4da56fd9-4035-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 12:13:46.919: INFO: Waiting up to 5m0s for pod "pod-secrets-4dac2573-4035-11ea-b664-0242ac110005" in namespace "e2e-tests-secrets-2nxb6" to be "success or failure"
Jan 26 12:13:46.953: INFO: Pod "pod-secrets-4dac2573-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.263917ms
Jan 26 12:13:49.130: INFO: Pod "pod-secrets-4dac2573-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210794477s
Jan 26 12:13:51.142: INFO: Pod "pod-secrets-4dac2573-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222138359s
Jan 26 12:13:53.159: INFO: Pod "pod-secrets-4dac2573-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.239140994s
Jan 26 12:13:55.176: INFO: Pod "pod-secrets-4dac2573-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.256903663s
Jan 26 12:13:57.884: INFO: Pod "pod-secrets-4dac2573-4035-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.964787522s
STEP: Saw pod success
Jan 26 12:13:57.885: INFO: Pod "pod-secrets-4dac2573-4035-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:13:57.903: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4dac2573-4035-11ea-b664-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 26 12:13:58.117: INFO: Waiting for pod pod-secrets-4dac2573-4035-11ea-b664-0242ac110005 to disappear
Jan 26 12:13:58.215: INFO: Pod pod-secrets-4dac2573-4035-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:13:58.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2nxb6" for this suite.
Jan 26 12:14:04.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:14:04.337: INFO: namespace: e2e-tests-secrets-2nxb6, resource: bindings, ignored listing per whitelist
Jan 26 12:14:04.869: INFO: namespace e2e-tests-secrets-2nxb6 deletion completed in 6.645401666s

• [SLOW TEST:18.166 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
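"Mappings and Item Mode set" means a secret key is remapped to a chosen file name with an explicit file mode inside the volume. A sketch with illustrative names (the test generates its own secret name, key, and mode):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400
EOF
kubectl logs secret-mapping-demo
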
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:14:04.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 26 12:14:05.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-f8r9x'
Jan 26 12:14:05.303: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 26 12:14:05.303: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 26 12:14:07.342: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-d2lkm]
Jan 26 12:14:07.342: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-d2lkm" in namespace "e2e-tests-kubectl-f8r9x" to be "running and ready"
Jan 26 12:14:07.347: INFO: Pod "e2e-test-nginx-rc-d2lkm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.691345ms
Jan 26 12:14:09.382: INFO: Pod "e2e-test-nginx-rc-d2lkm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039874621s
Jan 26 12:14:11.389: INFO: Pod "e2e-test-nginx-rc-d2lkm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047315247s
Jan 26 12:14:13.450: INFO: Pod "e2e-test-nginx-rc-d2lkm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107936849s
Jan 26 12:14:15.465: INFO: Pod "e2e-test-nginx-rc-d2lkm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123372396s
Jan 26 12:14:17.480: INFO: Pod "e2e-test-nginx-rc-d2lkm": Phase="Running", Reason="", readiness=true. Elapsed: 10.138386362s
Jan 26 12:14:17.480: INFO: Pod "e2e-test-nginx-rc-d2lkm" satisfied condition "running and ready"
Jan 26 12:14:17.481: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-d2lkm]
Jan 26 12:14:17.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-f8r9x'
Jan 26 12:14:17.828: INFO: stderr: ""
Jan 26 12:14:17.828: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan 26 12:14:17.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-f8r9x'
Jan 26 12:14:17.986: INFO: stderr: ""
Jan 26 12:14:17.986: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:14:17.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-f8r9x" for this suite.
Jan 26 12:14:42.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:14:42.171: INFO: namespace: e2e-tests-kubectl-f8r9x, resource: bindings, ignored listing per whitelist
Jan 26 12:14:42.328: INFO: namespace e2e-tests-kubectl-f8r9x deletion completed in 24.31672148s

• [SLOW TEST:37.459 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
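The commands the test issued, reproduced for reference (namespace and rc name are this run's; the run/v1 generator is deprecated, as the warning notes):

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine \
  --generator=run/v1 --namespace=e2e-tests-kubectl-f8r9x     # creates a ReplicationController
kubectl logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-f8r9x
kubectl delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-f8r9x
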
SSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:14:42.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-qc6rs
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qc6rs to expose endpoints map[]
Jan 26 12:14:43.005: INFO: Get endpoints failed (13.418547ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 26 12:14:44.022: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qc6rs exposes endpoints map[] (1.030190661s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-qc6rs
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qc6rs to expose endpoints map[pod1:[100]]
Jan 26 12:14:49.655: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.612157722s elapsed, will retry)
Jan 26 12:14:53.904: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qc6rs exposes endpoints map[pod1:[100]] (9.861839857s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-qc6rs
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qc6rs to expose endpoints map[pod1:[100] pod2:[101]]
Jan 26 12:14:59.796: INFO: Unexpected endpoints: found map[6fbd7eb4-4035-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.760939986s elapsed, will retry)
Jan 26 12:15:04.330: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qc6rs exposes endpoints map[pod1:[100] pod2:[101]] (10.294935075s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-qc6rs
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qc6rs to expose endpoints map[pod2:[101]]
Jan 26 12:15:04.387: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qc6rs exposes endpoints map[pod2:[101]] (26.732423ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-qc6rs
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qc6rs to expose endpoints map[]
Jan 26 12:15:05.825: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qc6rs exposes endpoints map[] (1.421865729s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:15:06.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-qc6rs" for this suite.
Jan 26 12:15:31.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:15:31.392: INFO: namespace: e2e-tests-services-qc6rs, resource: bindings, ignored listing per whitelist
Jan 26 12:15:31.398: INFO: namespace e2e-tests-services-qc6rs deletion completed in 24.396770063s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:49.070 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
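A sketch of the endpoint bookkeeping this Services spec checks, with illustrative names: a multi-port Service selects pods by label, and its Endpoints object gains and loses addresses as matching pods are created and deleted.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-demo
spec:
  selector:
    app: multi-endpoint-demo
  ports:
  - name: portname1
    port: 80
    targetPort: 80
  - name: portname2
    port: 81
    targetPort: 80
EOF
kubectl run pod1 --image=nginx:1.14-alpine --restart=Never --labels=app=multi-endpoint-demo
kubectl get endpoints multi-endpoint-demo    # pod1's IP is listed once the pod is ready
kubectl delete pod pod1
kubectl get endpoints multi-endpoint-demo    # the address set drains back to empty
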
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:15:31.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-x7jz2/secret-test-8c326a2e-4035-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 12:15:31.858: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c33ac07-4035-11ea-b664-0242ac110005" in namespace "e2e-tests-secrets-x7jz2" to be "success or failure"
Jan 26 12:15:31.874: INFO: Pod "pod-configmaps-8c33ac07-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.267547ms
Jan 26 12:15:33.894: INFO: Pod "pod-configmaps-8c33ac07-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03587702s
Jan 26 12:15:35.907: INFO: Pod "pod-configmaps-8c33ac07-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049642801s
Jan 26 12:15:38.311: INFO: Pod "pod-configmaps-8c33ac07-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.453517388s
Jan 26 12:15:40.328: INFO: Pod "pod-configmaps-8c33ac07-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.470155267s
Jan 26 12:15:42.470: INFO: Pod "pod-configmaps-8c33ac07-4035-11ea-b664-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.611760554s
Jan 26 12:15:44.535: INFO: Pod "pod-configmaps-8c33ac07-4035-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.677617683s
STEP: Saw pod success
Jan 26 12:15:44.535: INFO: Pod "pod-configmaps-8c33ac07-4035-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:15:44.550: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8c33ac07-4035-11ea-b664-0242ac110005 container env-test: 
STEP: delete the pod
Jan 26 12:15:44.849: INFO: Waiting for pod pod-configmaps-8c33ac07-4035-11ea-b664-0242ac110005 to disappear
Jan 26 12:15:44.862: INFO: Pod pod-configmaps-8c33ac07-4035-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:15:44.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-x7jz2" for this suite.
Jan 26 12:15:51.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:15:51.193: INFO: namespace: e2e-tests-secrets-x7jz2, resource: bindings, ignored listing per whitelist
Jan 26 12:15:51.224: INFO: namespace e2e-tests-secrets-x7jz2 deletion completed in 6.345932559s

• [SLOW TEST:19.825 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
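A sketch of consuming a secret through the environment, as this spec does, with illustrative names in place of the generated ones:

kubectl create secret generic env-demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: env-demo-secret
          key: data-1
EOF
kubectl logs secret-env-demo    # expect SECRET_DATA=value-1
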
SS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:15:51.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 12:15:51.652: INFO: Creating ReplicaSet my-hostname-basic-980b0e11-4035-11ea-b664-0242ac110005
Jan 26 12:15:51.701: INFO: Pod name my-hostname-basic-980b0e11-4035-11ea-b664-0242ac110005: Found 0 pods out of 1
Jan 26 12:15:56.712: INFO: Pod name my-hostname-basic-980b0e11-4035-11ea-b664-0242ac110005: Found 1 pods out of 1
Jan 26 12:15:56.712: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-980b0e11-4035-11ea-b664-0242ac110005" is running
Jan 26 12:16:02.728: INFO: Pod "my-hostname-basic-980b0e11-4035-11ea-b664-0242ac110005-8ft5j" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 12:15:51 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 12:15:51 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-980b0e11-4035-11ea-b664-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 12:15:51 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-980b0e11-4035-11ea-b664-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 12:15:51 +0000 UTC Reason: Message:}])
Jan 26 12:16:02.728: INFO: Trying to dial the pod
Jan 26 12:16:07.788: INFO: Controller my-hostname-basic-980b0e11-4035-11ea-b664-0242ac110005: Got expected result from replica 1 [my-hostname-basic-980b0e11-4035-11ea-b664-0242ac110005-8ft5j]: "my-hostname-basic-980b0e11-4035-11ea-b664-0242ac110005-8ft5j", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:16:07.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-qghtb" for this suite.
Jan 26 12:16:13.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:16:13.963: INFO: namespace: e2e-tests-replicaset-qghtb, resource: bindings, ignored listing per whitelist
Jan 26 12:16:14.021: INFO: namespace e2e-tests-replicaset-qghtb deletion completed in 6.216905826s

• [SLOW TEST:22.796 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:16:14.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan 26 12:16:24.399: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-a581cda0-4035-11ea-b664-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-mmch5", SelfLink:"/api/v1/namespaces/e2e-tests-pods-mmch5/pods/pod-submit-remove-a581cda0-4035-11ea-b664-0242ac110005", UID:"a58660de-4035-11ea-a994-fa163e34d433", ResourceVersion:"19521449", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715637774, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"241332897"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5c7vt", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0023ac380), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5c7vt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002559f78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021eff20), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002559fb0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002559fd0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002559fd8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002559fdc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715637774, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715637783, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715637783, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715637774, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00251d0c0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00251d0e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://514da18b8d8ab035f2e08233e505d199ccb65940fd3c53822186a84cb0e2a159"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:16:31.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mmch5" for this suite.
Jan 26 12:16:37.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:16:37.395: INFO: namespace: e2e-tests-pods-mmch5, resource: bindings, ignored listing per whitelist
Jan 26 12:16:37.467: INFO: namespace e2e-tests-pods-mmch5 deletion completed in 6.210126323s

• [SLOW TEST:23.445 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
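Roughly what the spec does by hand (the pod name is illustrative; the test uses a generated one and an API watch): submit a pod, observe its lifecycle events, then delete it gracefully and confirm the deletion is observed.

kubectl run watched-pod --image=nginx:1.14-alpine --restart=Never
kubectl get pod watched-pod --watch              # in a second shell: shows the create, ready, and delete transitions
kubectl delete pod watched-pod --grace-period=30 # graceful deletion, as the spec performs
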
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:16:37.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan 26 12:16:37.604: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:16:37.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-frlvk" for this suite.
Jan 26 12:16:43.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:16:44.001: INFO: namespace: e2e-tests-kubectl-frlvk, resource: bindings, ignored listing per whitelist
Jan 26 12:16:44.290: INFO: namespace e2e-tests-kubectl-frlvk deletion completed in 6.519168777s

• [SLOW TEST:6.823 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
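What the spec does by hand: start a proxy on a random free port (-p 0) and curl its /api/ endpoint. The --disable-filter flag the test passes turns off the proxy's request filtering and is unsafe outside a throwaway test environment.

kubectl proxy -p 0 --disable-filter &    # prints "Starting to serve on 127.0.0.1:<port>"
curl http://127.0.0.1:<port>/api/        # replace <port> with the port printed above
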
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:16:44.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 12:16:44.468: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b780b0a5-4035-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-krvq5" to be "success or failure"
Jan 26 12:16:44.555: INFO: Pod "downwardapi-volume-b780b0a5-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 87.694238ms
Jan 26 12:16:46.645: INFO: Pod "downwardapi-volume-b780b0a5-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177700694s
Jan 26 12:16:48.660: INFO: Pod "downwardapi-volume-b780b0a5-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19199852s
Jan 26 12:16:51.441: INFO: Pod "downwardapi-volume-b780b0a5-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.973426418s
Jan 26 12:16:53.455: INFO: Pod "downwardapi-volume-b780b0a5-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.987244771s
Jan 26 12:16:55.473: INFO: Pod "downwardapi-volume-b780b0a5-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.00520727s
Jan 26 12:16:57.702: INFO: Pod "downwardapi-volume-b780b0a5-4035-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.234187362s
STEP: Saw pod success
Jan 26 12:16:57.702: INFO: Pod "downwardapi-volume-b780b0a5-4035-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:16:57.713: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b780b0a5-4035-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 12:16:58.008: INFO: Waiting for pod downwardapi-volume-b780b0a5-4035-11ea-b664-0242ac110005 to disappear
Jan 26 12:16:58.084: INFO: Pod downwardapi-volume-b780b0a5-4035-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:16:58.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-krvq5" for this suite.
Jan 26 12:17:04.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:17:04.234: INFO: namespace: e2e-tests-projected-krvq5, resource: bindings, ignored listing per whitelist
Jan 26 12:17:04.377: INFO: namespace e2e-tests-projected-krvq5 deletion completed in 6.274609346s

• [SLOW TEST:20.087 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
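A sketch of the projected downward API volume this spec uses, with illustrative pod and file names: the file exposes the container's cpu limit via resourceFieldRef, and because no cpu limit is set on the container, it reports the node's allocatable cpu instead.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
kubectl logs downwardapi-projected-demo
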
SS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:17:04.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 26 12:17:04.673: INFO: Waiting up to 5m0s for pod "downward-api-c38f3f26-4035-11ea-b664-0242ac110005" in namespace "e2e-tests-downward-api-gkhg8" to be "success or failure"
Jan 26 12:17:04.722: INFO: Pod "downward-api-c38f3f26-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 48.462544ms
Jan 26 12:17:06.730: INFO: Pod "downward-api-c38f3f26-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056629362s
Jan 26 12:17:08.756: INFO: Pod "downward-api-c38f3f26-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083083178s
Jan 26 12:17:10.782: INFO: Pod "downward-api-c38f3f26-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108773321s
Jan 26 12:17:12.798: INFO: Pod "downward-api-c38f3f26-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124777168s
Jan 26 12:17:15.203: INFO: Pod "downward-api-c38f3f26-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.530203505s
Jan 26 12:17:17.220: INFO: Pod "downward-api-c38f3f26-4035-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.546970689s
STEP: Saw pod success
Jan 26 12:17:17.220: INFO: Pod "downward-api-c38f3f26-4035-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:17:17.226: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c38f3f26-4035-11ea-b664-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 26 12:17:17.321: INFO: Waiting for pod downward-api-c38f3f26-4035-11ea-b664-0242ac110005 to disappear
Jan 26 12:17:17.381: INFO: Pod downward-api-c38f3f26-4035-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:17:17.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gkhg8" for this suite.
Jan 26 12:17:23.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:17:23.511: INFO: namespace: e2e-tests-downward-api-gkhg8, resource: bindings, ignored listing per whitelist
Jan 26 12:17:23.741: INFO: namespace e2e-tests-downward-api-gkhg8 deletion completed in 6.342110334s

• [SLOW TEST:19.365 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
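For reference, a minimal sketch of the kind of pod this Downward API test creates; the pod name, image, and env var name below are illustrative and not taken from the log — only the fieldRef to status.hostIP is the mechanism under test:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                     # illustrative image
    command: ["sh", "-c", "printenv HOST_IP"]
    env:
    - name: HOST_IP                    # node IP exposed via the downward API
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP

The test then waits for the pod to reach Succeeded and checks the container log for the expected value, which is what the "success or failure" polling above records.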
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:17:23.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 26 12:17:24.159: INFO: Waiting up to 5m0s for pod "pod-cf2c0639-4035-11ea-b664-0242ac110005" in namespace "e2e-tests-emptydir-fbnf4" to be "success or failure"
Jan 26 12:17:24.168: INFO: Pod "pod-cf2c0639-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.934248ms
Jan 26 12:17:26.200: INFO: Pod "pod-cf2c0639-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041027952s
Jan 26 12:17:28.211: INFO: Pod "pod-cf2c0639-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052084589s
Jan 26 12:17:30.565: INFO: Pod "pod-cf2c0639-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.40595221s
Jan 26 12:17:32.635: INFO: Pod "pod-cf2c0639-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.476168793s
Jan 26 12:17:34.656: INFO: Pod "pod-cf2c0639-4035-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.496996829s
STEP: Saw pod success
Jan 26 12:17:34.656: INFO: Pod "pod-cf2c0639-4035-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:17:34.660: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-cf2c0639-4035-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 12:17:34.928: INFO: Waiting for pod pod-cf2c0639-4035-11ea-b664-0242ac110005 to disappear
Jan 26 12:17:34.969: INFO: Pod pod-cf2c0639-4035-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:17:34.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fbnf4" for this suite.
Jan 26 12:17:41.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:17:41.244: INFO: namespace: e2e-tests-emptydir-fbnf4, resource: bindings, ignored listing per whitelist
Jan 26 12:17:41.425: INFO: namespace e2e-tests-emptydir-fbnf4 deletion completed in 6.443412868s

• [SLOW TEST:17.683 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
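A minimal sketch of an emptyDir pod matching the (non-root,0644,tmpfs) case: medium: Memory gives the tmpfs backing and runAsUser makes the write non-root; the name, image, uid, and command are illustrative, not from the log:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs-example    # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # any non-root uid; value is illustrative
  containers:
  - name: test-container
    image: busybox                     # illustrative image
    command: ["sh", "-c", "echo data > /mnt/vol/file && chmod 0644 /mnt/vol/file && ls -l /mnt/vol/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/vol
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                   # tmpfs-backed emptyDir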
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:17:41.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 26 12:17:41.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-7c68c'
Jan 26 12:17:41.801: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 26 12:17:41.801: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan 26 12:17:43.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-7c68c'
Jan 26 12:17:44.119: INFO: stderr: ""
Jan 26 12:17:44.119: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:17:44.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7c68c" for this suite.
Jan 26 12:17:50.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:17:50.649: INFO: namespace: e2e-tests-kubectl-7c68c, resource: bindings, ignored listing per whitelist
Jan 26 12:17:50.663: INFO: namespace e2e-tests-kubectl-7c68c deletion completed in 6.423651469s

• [SLOW TEST:9.237 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:17:50.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 26 12:17:50.882: INFO: Waiting up to 5m0s for pod "downward-api-df15b52b-4035-11ea-b664-0242ac110005" in namespace "e2e-tests-downward-api-wlncv" to be "success or failure"
Jan 26 12:17:50.905: INFO: Pod "downward-api-df15b52b-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.572063ms
Jan 26 12:17:52.918: INFO: Pod "downward-api-df15b52b-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036055277s
Jan 26 12:17:54.937: INFO: Pod "downward-api-df15b52b-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054996038s
Jan 26 12:17:56.949: INFO: Pod "downward-api-df15b52b-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067040918s
Jan 26 12:17:59.072: INFO: Pod "downward-api-df15b52b-4035-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190483448s
Jan 26 12:18:01.107: INFO: Pod "downward-api-df15b52b-4035-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.224548634s
STEP: Saw pod success
Jan 26 12:18:01.107: INFO: Pod "downward-api-df15b52b-4035-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:18:01.688: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-df15b52b-4035-11ea-b664-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 26 12:18:01.975: INFO: Waiting for pod downward-api-df15b52b-4035-11ea-b664-0242ac110005 to disappear
Jan 26 12:18:01.983: INFO: Pod downward-api-df15b52b-4035-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:18:01.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wlncv" for this suite.
Jan 26 12:18:10.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:18:10.179: INFO: namespace: e2e-tests-downward-api-wlncv, resource: bindings, ignored listing per whitelist
Jan 26 12:18:10.260: INFO: namespace e2e-tests-downward-api-wlncv deletion completed in 8.265465972s

• [SLOW TEST:19.596 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
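A sketch of the resourceFieldRef wiring this test exercises; the resource quantities and env var names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-resources-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                        # illustrative image
    command: ["sh", "-c", "printenv CPU_LIMIT MEMORY_REQUEST"]
    resources:
      requests:
        cpu: 250m                         # illustrative values
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory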
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:18:10.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 26 12:21:13.465: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:13.544: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:15.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:15.563: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:17.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:17.555: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:19.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:19.563: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:21.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:21.563: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:23.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:23.569: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:25.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:25.559: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:27.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:27.560: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:29.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:29.565: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:31.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:31.568: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:33.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:33.561: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:35.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:35.562: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:37.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:37.563: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:39.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:39.564: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:41.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:41.559: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:43.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:43.647: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:45.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:45.562: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:47.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:47.564: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:49.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:49.563: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:51.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:51.563: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:53.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:53.571: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:55.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:55.561: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:57.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:57.595: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:21:59.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:21:59.564: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:01.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:01.558: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:03.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:03.567: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:05.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:05.561: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:07.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:07.562: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:09.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:09.561: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:11.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:11.563: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:13.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:13.971: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:15.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:15.556: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:17.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:17.785: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:19.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:19.558: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:21.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:21.560: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:23.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:23.561: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:25.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:25.554: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:27.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:27.600: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:29.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:29.588: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:31.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:31.557: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:33.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:33.573: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:35.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:35.559: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:37.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:37.572: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:39.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:39.565: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:41.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:41.555: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:43.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:43.564: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:45.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:45.559: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:47.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:47.559: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:49.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:49.564: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:51.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:51.562: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 12:22:53.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 12:22:53.562: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:22:53.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gfprc" for this suite.
Jan 26 12:23:17.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:23:17.741: INFO: namespace: e2e-tests-container-lifecycle-hook-gfprc, resource: bindings, ignored listing per whitelist
Jan 26 12:23:17.841: INFO: namespace e2e-tests-container-lifecycle-hook-gfprc deletion completed in 24.26924482s

• [SLOW TEST:307.581 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
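The hook mechanics are not visible in the log itself; a simplified sketch of a postStart exec hook follows. The conformance test's real hook calls back to the handler pod created in the BeforeEach step, which this sketch does not reproduce — the handler command here is illustrative only:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook     # name taken from the log
spec:
  containers:
  - name: poststart
    image: busybox                        # illustrative image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart ran > /tmp/poststart"]   # simplified handler

The preStop variant in the next test is analogous: lifecycle.preStop.exec runs when the pod is deleted, which is why that test only checks the hook after issuing the delete.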
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:23:17.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 26 12:23:38.187: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 12:23:38.254: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 12:23:40.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 12:23:40.272: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 12:23:42.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 12:23:42.267: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 12:23:44.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 12:23:44.269: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 12:23:46.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 12:23:46.272: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 12:23:48.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 12:23:48.298: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 12:23:50.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 12:23:50.283: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 12:23:52.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 12:23:52.267: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 12:23:54.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 12:23:54.302: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 12:23:56.255: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 12:23:56.333: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 12:23:58.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 12:23:58.271: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 12:24:00.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 12:24:00.268: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 12:24:02.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 12:24:02.270: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 12:24:04.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 12:24:04.269: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:24:04.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-x6spw" for this suite.
Jan 26 12:24:28.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:24:28.596: INFO: namespace: e2e-tests-container-lifecycle-hook-x6spw, resource: bindings, ignored listing per whitelist
Jan 26 12:24:28.633: INFO: namespace e2e-tests-container-lifecycle-hook-x6spw deletion completed in 24.323810742s

• [SLOW TEST:70.791 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:24:28.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 26 12:24:28.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-7bsqb'
Jan 26 12:24:30.905: INFO: stderr: ""
Jan 26 12:24:30.905: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 26 12:24:40.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-7bsqb -o json'
Jan 26 12:24:41.151: INFO: stderr: ""
Jan 26 12:24:41.151: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-26T12:24:30Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-7bsqb\",\n        \"resourceVersion\": \"19522285\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-7bsqb/pods/e2e-test-nginx-pod\",\n        \"uid\": \"cd828222-4036-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-4l2kw\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-4l2kw\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-4l2kw\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-26T12:24:31Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-26T12:24:39Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-26T12:24:39Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-26T12:24:30Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://ab66cbd880064a83d0c66a61fbd11709a7bb69118fc8027f96216350fc1747a6\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-26T12:24:39Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-26T12:24:31Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 26 12:24:41.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-7bsqb'
Jan 26 12:24:41.520: INFO: stderr: ""
Jan 26 12:24:41.521: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan 26 12:24:41.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-7bsqb'
Jan 26 12:24:49.874: INFO: stderr: ""
Jan 26 12:24:49.874: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:24:49.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7bsqb" for this suite.
Jan 26 12:24:56.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:24:56.414: INFO: namespace: e2e-tests-kubectl-7bsqb, resource: bindings, ignored listing per whitelist
Jan 26 12:24:56.561: INFO: namespace e2e-tests-kubectl-7bsqb deletion completed in 6.658698525s

• [SLOW TEST:27.928 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
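The manifest piped into kubectl replace is not echoed in the log; a sketch of what it looks like, with the image switched to docker.io/library/busybox:1.29 as the verification step above indicates. The sleep command is illustrative, added here only so the replaced pod keeps running:

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # the replacement image the test checks for
    command: ["sleep", "3600"]              # illustrative; not shown in the log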
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:24:56.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan 26 12:24:56.889: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix752509802/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:24:56.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6hv8s" for this suite.
Jan 26 12:25:03.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:25:03.101: INFO: namespace: e2e-tests-kubectl-6hv8s, resource: bindings, ignored listing per whitelist
Jan 26 12:25:03.181: INFO: namespace e2e-tests-kubectl-6hv8s deletion completed in 6.181851583s

• [SLOW TEST:6.619 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:25:03.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-e0e4abed-4036-11ea-b664-0242ac110005
STEP: Creating secret with name s-test-opt-upd-e0e4ac80-4036-11ea-b664-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-e0e4abed-4036-11ea-b664-0242ac110005
STEP: Updating secret s-test-opt-upd-e0e4ac80-4036-11ea-b664-0242ac110005
STEP: Creating secret with name s-test-opt-create-e0e4ac9c-4036-11ea-b664-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:25:21.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5dggk" for this suite.
Jan 26 12:25:46.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:25:46.202: INFO: namespace: e2e-tests-projected-5dggk, resource: bindings, ignored listing per whitelist
Jan 26 12:25:46.251: INFO: namespace e2e-tests-projected-5dggk deletion completed in 24.259205165s

• [SLOW TEST:43.070 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
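A sketch of the projected-secret volume this test watches; the secret names are abbreviated from the log's generated names, and optional: true is what lets the "del" source be removed and the "create" source appear without failing the pod:

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-optional-example   # illustrative name
spec:
  containers:
  - name: watcher
    image: busybox                           # illustrative image
    command: ["sh", "-c", "sleep 600"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/projected
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del               # abbreviated from the generated name in the log
          optional: true
      - secret:
          name: s-test-opt-upd               # abbreviated from the generated name in the log
          optional: true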
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:25:46.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 12:25:46.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa894b2b-4036-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-f4dtf" to be "success or failure"
Jan 26 12:25:46.591: INFO: Pod "downwardapi-volume-fa894b2b-4036-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 74.67119ms
Jan 26 12:25:48.607: INFO: Pod "downwardapi-volume-fa894b2b-4036-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090310486s
Jan 26 12:25:50.632: INFO: Pod "downwardapi-volume-fa894b2b-4036-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115162969s
Jan 26 12:25:52.670: INFO: Pod "downwardapi-volume-fa894b2b-4036-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153377051s
Jan 26 12:25:54.708: INFO: Pod "downwardapi-volume-fa894b2b-4036-11ea-b664-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.192002345s
Jan 26 12:25:57.517: INFO: Pod "downwardapi-volume-fa894b2b-4036-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.00082906s
STEP: Saw pod success
Jan 26 12:25:57.517: INFO: Pod "downwardapi-volume-fa894b2b-4036-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:25:57.543: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fa894b2b-4036-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 12:25:57.896: INFO: Waiting for pod downwardapi-volume-fa894b2b-4036-11ea-b664-0242ac110005 to disappear
Jan 26 12:25:57.920: INFO: Pod downwardapi-volume-fa894b2b-4036-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:25:57.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f4dtf" for this suite.
Jan 26 12:26:04.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:26:04.139: INFO: namespace: e2e-tests-projected-f4dtf, resource: bindings, ignored listing per whitelist
Jan 26 12:26:04.168: INFO: namespace e2e-tests-projected-f4dtf deletion completed in 6.231055806s

• [SLOW TEST:17.916 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
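A sketch of a projected downward API volume with an explicit defaultMode, as exercised above; the mode value, mount path, and file name are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-mode-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                            # illustrative image
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                       # illustrative mode applied to the projected files
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name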
SS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:26:04.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:26:15.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-nj6vs" for this suite.
Jan 26 12:26:39.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:26:39.692: INFO: namespace: e2e-tests-replication-controller-nj6vs, resource: bindings, ignored listing per whitelist
Jan 26 12:26:39.789: INFO: namespace e2e-tests-replication-controller-nj6vs deletion completed in 24.24273154s

• [SLOW TEST:35.621 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
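A sketch of the adoption scenario: a bare pod labelled name=pod-adoption is created first, then a replication controller whose selector matches that label takes ownership of the orphan. The image is illustrative; the pod-adoption name comes from the log:

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine   # illustrative image
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption            # matches the orphan pod's label, so it is adopted
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine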
SSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:26:39.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-1a8fec31-4037-11ea-b664-0242ac110005
Jan 26 12:26:40.142: INFO: Pod name my-hostname-basic-1a8fec31-4037-11ea-b664-0242ac110005: Found 0 pods out of 1
Jan 26 12:26:45.632: INFO: Pod name my-hostname-basic-1a8fec31-4037-11ea-b664-0242ac110005: Found 1 pods out of 1
Jan 26 12:26:45.632: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1a8fec31-4037-11ea-b664-0242ac110005" are running
Jan 26 12:26:49.678: INFO: Pod "my-hostname-basic-1a8fec31-4037-11ea-b664-0242ac110005-mgggl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 12:26:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 12:26:40 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1a8fec31-4037-11ea-b664-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 12:26:40 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1a8fec31-4037-11ea-b664-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 12:26:40 +0000 UTC Reason: Message:}])
Jan 26 12:26:49.678: INFO: Trying to dial the pod
Jan 26 12:26:54.721: INFO: Controller my-hostname-basic-1a8fec31-4037-11ea-b664-0242ac110005: Got expected result from replica 1 [my-hostname-basic-1a8fec31-4037-11ea-b664-0242ac110005-mgggl]: "my-hostname-basic-1a8fec31-4037-11ea-b664-0242ac110005-mgggl", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:26:54.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-vhh7t" for this suite.
Jan 26 12:27:02.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:27:02.835: INFO: namespace: e2e-tests-replication-controller-vhh7t, resource: bindings, ignored listing per whitelist
Jan 26 12:27:02.884: INFO: namespace e2e-tests-replication-controller-vhh7t deletion completed in 8.154824267s

• [SLOW TEST:23.094 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:27:02.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-gbvs
STEP: Creating a pod to test atomic-volume-subpath
Jan 26 12:27:04.018: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gbvs" in namespace "e2e-tests-subpath-s2vgf" to be "success or failure"
Jan 26 12:27:04.034: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Pending", Reason="", readiness=false. Elapsed: 15.806857ms
Jan 26 12:27:06.471: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.452592174s
Jan 26 12:27:08.499: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481026881s
Jan 26 12:27:10.703: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.684335203s
Jan 26 12:27:12.710: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.692084954s
Jan 26 12:27:14.724: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.705383616s
Jan 26 12:27:16.736: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Pending", Reason="", readiness=false. Elapsed: 12.717854734s
Jan 26 12:27:18.769: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Pending", Reason="", readiness=false. Elapsed: 14.750329266s
Jan 26 12:27:20.784: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Running", Reason="", readiness=false. Elapsed: 16.766025215s
Jan 26 12:27:22.795: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Running", Reason="", readiness=false. Elapsed: 18.777136285s
Jan 26 12:27:24.950: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Running", Reason="", readiness=false. Elapsed: 20.932186298s
Jan 26 12:27:26.975: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Running", Reason="", readiness=false. Elapsed: 22.956470055s
Jan 26 12:27:28.986: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Running", Reason="", readiness=false. Elapsed: 24.96769527s
Jan 26 12:27:31.029: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Running", Reason="", readiness=false. Elapsed: 27.010646674s
Jan 26 12:27:33.104: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Running", Reason="", readiness=false. Elapsed: 29.085751682s
Jan 26 12:27:35.130: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Running", Reason="", readiness=false. Elapsed: 31.1118088s
Jan 26 12:27:37.169: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Running", Reason="", readiness=false. Elapsed: 33.151066614s
Jan 26 12:27:39.187: INFO: Pod "pod-subpath-test-downwardapi-gbvs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.168333312s
STEP: Saw pod success
Jan 26 12:27:39.187: INFO: Pod "pod-subpath-test-downwardapi-gbvs" satisfied condition "success or failure"
Jan 26 12:27:39.192: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-gbvs container test-container-subpath-downwardapi-gbvs: 
STEP: delete the pod
Jan 26 12:27:39.272: INFO: Waiting for pod pod-subpath-test-downwardapi-gbvs to disappear
Jan 26 12:27:40.421: INFO: Pod pod-subpath-test-downwardapi-gbvs no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-gbvs
Jan 26 12:27:40.421: INFO: Deleting pod "pod-subpath-test-downwardapi-gbvs" in namespace "e2e-tests-subpath-s2vgf"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:27:40.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-s2vgf" for this suite.
Jan 26 12:27:46.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:27:46.949: INFO: namespace: e2e-tests-subpath-s2vgf, resource: bindings, ignored listing per whitelist
Jan 26 12:27:46.997: INFO: namespace e2e-tests-subpath-s2vgf deletion completed in 6.510180254s

• [SLOW TEST:44.112 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
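A sketch of the subPath-into-a-downwardAPI-volume layout that this atomic-writer test covers; the names, mount path, and projected field are illustrative rather than the exact spec the framework builds:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                              # illustrative image
    command: ["sh", "-c", "cat /test/podname"]
    volumeMounts:
    - name: downward
      mountPath: /test/podname
      subPath: podname                          # mounts a single file from the atomic-writer volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name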
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:27:46.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 12:27:47.235: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:27:48.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-vkk7g" for this suite.
Jan 26 12:27:54.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:27:54.776: INFO: namespace: e2e-tests-custom-resource-definition-vkk7g, resource: bindings, ignored listing per whitelist
Jan 26 12:27:54.806: INFO: namespace e2e-tests-custom-resource-definition-vkk7g deletion completed in 6.241610829s

• [SLOW TEST:7.809 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
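Nothing about the CRD itself appears in the log; a minimal v1beta1 CustomResourceDefinition of the sort this test creates and deletes, with an illustrative group, kind, and plural (apiextensions.k8s.io/v1beta1 is the version served by this v1.13 apiserver, per the api-versions output further below):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com        # must be <plural>.<group>; illustrative
spec:
  group: example.com                # illustrative group
  version: v1
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd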
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:27:54.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan 26 12:27:55.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 26 12:27:55.343: INFO: stderr: ""
Jan 26 12:27:55.343: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:27:55.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4mrp4" for this suite.
Jan 26 12:28:01.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:28:01.459: INFO: namespace: e2e-tests-kubectl-4mrp4, resource: bindings, ignored listing per whitelist
Jan 26 12:28:01.616: INFO: namespace e2e-tests-kubectl-4mrp4 deletion completed in 6.243148905s

• [SLOW TEST:6.810 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
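Editor's note: the api-versions check above shells out to kubectl; the same list can be read programmatically through client-go's discovery client. A minimal sketch, assuming the kubeconfig path used by this run (/root/.kube/config):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/discovery"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	dc, err := discovery.NewDiscoveryClientForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	groups, err := dc.ServerGroups()
    	if err != nil {
    		panic(err)
    	}
    	// Print every served group/version; the core group appears as the bare
    	// "v1" entry, which is what the spec asserts is present.
    	for _, g := range groups.Groups {
    		for _, v := range g.Versions {
    			fmt.Println(v.GroupVersion)
    		}
    	}
    }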
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:28:01.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:28:10.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-2mpb5" for this suite.
Jan 26 12:28:52.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:28:52.450: INFO: namespace: e2e-tests-kubelet-test-2mpb5, resource: bindings, ignored listing per whitelist
Jan 26 12:28:52.456: INFO: namespace e2e-tests-kubelet-test-2mpb5 deletion completed in 42.294783289s

• [SLOW TEST:50.839 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
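Editor's note: a minimal sketch of the sort of pod the Kubelet logging spec above exercises: a busybox container whose command writes to stdout, which the kubelet then exposes through the pod's logs. The image, command and names here are assumptions, not the exact e2e fixture.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs-test"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:    "busybox",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "echo 'Hello from busybox'"},
    			}},
    		},
    	}
    	// Whatever the command prints should show up in the container's log
    	// stream (e.g. `kubectl logs busybox-logs-test`).
    	fmt.Printf("%v\n", pod.Spec.Containers[0].Command)
    }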
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:28:52.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:29:04.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-q9ssf" for this suite.
Jan 26 12:29:11.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:29:11.137: INFO: namespace: e2e-tests-kubelet-test-q9ssf, resource: bindings, ignored listing per whitelist
Jan 26 12:29:11.175: INFO: namespace e2e-tests-kubelet-test-q9ssf deletion completed in 6.218388682s

• [SLOW TEST:18.719 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
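Editor's note: the always-fails variant above asserts on the container's terminated state rather than on its logs. A sketch of the status fields involved, under the assumption that the pod's command exits non-zero:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // hasTerminatedReason reports whether every container in the pod has reached
    // a terminated state that carries a non-empty Reason (e.g. "Error" after a
    // command such as /bin/false exits non-zero).
    func hasTerminatedReason(pod *corev1.Pod) bool {
    	if len(pod.Status.ContainerStatuses) == 0 {
    		return false
    	}
    	for _, cs := range pod.Status.ContainerStatuses {
    		t := cs.State.Terminated
    		if t == nil || t.Reason == "" {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	// With an empty status the check is false; a real test would poll the
    	// API server until the containers report State.Terminated.
    	fmt.Println(hasTerminatedReason(&corev1.Pod{}))
    }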
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:29:11.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-74b64f5f-4037-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 12:29:11.394: INFO: Waiting up to 5m0s for pod "pod-secrets-74b7e9a2-4037-11ea-b664-0242ac110005" in namespace "e2e-tests-secrets-bgzmz" to be "success or failure"
Jan 26 12:29:11.406: INFO: Pod "pod-secrets-74b7e9a2-4037-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.757143ms
Jan 26 12:29:13.418: INFO: Pod "pod-secrets-74b7e9a2-4037-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023969275s
Jan 26 12:29:15.436: INFO: Pod "pod-secrets-74b7e9a2-4037-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041746137s
Jan 26 12:29:17.522: INFO: Pod "pod-secrets-74b7e9a2-4037-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127291822s
Jan 26 12:29:19.533: INFO: Pod "pod-secrets-74b7e9a2-4037-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138899495s
Jan 26 12:29:21.573: INFO: Pod "pod-secrets-74b7e9a2-4037-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.178274726s
STEP: Saw pod success
Jan 26 12:29:21.573: INFO: Pod "pod-secrets-74b7e9a2-4037-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:29:21.577: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-74b7e9a2-4037-11ea-b664-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan 26 12:29:21.727: INFO: Waiting for pod pod-secrets-74b7e9a2-4037-11ea-b664-0242ac110005 to disappear
Jan 26 12:29:21.810: INFO: Pod pod-secrets-74b7e9a2-4037-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:29:21.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bgzmz" for this suite.
Jan 26 12:29:27.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:29:27.995: INFO: namespace: e2e-tests-secrets-bgzmz, resource: bindings, ignored listing per whitelist
Jan 26 12:29:28.060: INFO: namespace e2e-tests-secrets-bgzmz deletion completed in 6.174827293s

• [SLOW TEST:16.884 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
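Editor's note: the Secrets spec above creates a Secret and a pod that surfaces one of its keys as an environment variable, then inspects the container output. A minimal sketch of that wiring; the secret name, key and variable name are stand-ins for the generated names in the log.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:    "secret-env-test",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "env"},
    				Env: []corev1.EnvVar{{
    					Name: "SECRET_DATA",
    					ValueFrom: &corev1.EnvVarSource{
    						SecretKeyRef: &corev1.SecretKeySelector{
    							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
    							Key:                  "data-1",
    						},
    					},
    				}},
    			}},
    		},
    	}
    	// The container's `env` output should include SECRET_DATA=<decoded value>.
    	fmt.Printf("%+v\n", pod.Spec.Containers[0].Env)
    }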
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:29:28.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-7ec89966-4037-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 12:29:28.319: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ecaeb84-4037-11ea-b664-0242ac110005" in namespace "e2e-tests-configmap-gwm7s" to be "success or failure"
Jan 26 12:29:28.331: INFO: Pod "pod-configmaps-7ecaeb84-4037-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.813713ms
Jan 26 12:29:30.627: INFO: Pod "pod-configmaps-7ecaeb84-4037-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30804075s
Jan 26 12:29:32.638: INFO: Pod "pod-configmaps-7ecaeb84-4037-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318398828s
Jan 26 12:29:35.113: INFO: Pod "pod-configmaps-7ecaeb84-4037-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.793652731s
Jan 26 12:29:37.121: INFO: Pod "pod-configmaps-7ecaeb84-4037-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.801406306s
Jan 26 12:29:39.176: INFO: Pod "pod-configmaps-7ecaeb84-4037-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.856999515s
STEP: Saw pod success
Jan 26 12:29:39.176: INFO: Pod "pod-configmaps-7ecaeb84-4037-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:29:39.185: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7ecaeb84-4037-11ea-b664-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 26 12:29:39.879: INFO: Waiting for pod pod-configmaps-7ecaeb84-4037-11ea-b664-0242ac110005 to disappear
Jan 26 12:29:40.266: INFO: Pod pod-configmaps-7ecaeb84-4037-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:29:40.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gwm7s" for this suite.
Jan 26 12:29:46.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:29:46.562: INFO: namespace: e2e-tests-configmap-gwm7s, resource: bindings, ignored listing per whitelist
Jan 26 12:29:46.573: INFO: namespace e2e-tests-configmap-gwm7s deletion completed in 6.290841907s

• [SLOW TEST:18.513 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
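Editor's note: the ConfigMap spec above is the volume-mount analogue, with the extra twist that the pod runs as a non-root UID. A sketch of a pod that mounts a ConfigMap volume and sets RunAsUser; UID 1000, the mount path and the command are assumptions.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	nonRootUID := int64(1000)
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			SecurityContext: &corev1.PodSecurityContext{
    				RunAsUser: &nonRootUID, // run the whole pod as a non-root user
    			},
    			Volumes: []corev1.Volume{{
    				Name: "configmap-volume",
    				VolumeSource: corev1.VolumeSource{
    					ConfigMap: &corev1.ConfigMapVolumeSource{
    						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
    					},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:    "configmap-volume-test",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
    				VolumeMounts: []corev1.VolumeMount{{
    					Name:      "configmap-volume",
    					MountPath: "/etc/configmap-volume",
    				}},
    			}},
    		},
    	}
    	fmt.Printf("runAsUser=%d\n", *pod.Spec.SecurityContext.RunAsUser)
    }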
SSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:29:46.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 12:29:46.825: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 26 12:29:46.868: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 26 12:29:51.881: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 26 12:29:57.899: INFO: Creating deployment "test-rolling-update-deployment"
Jan 26 12:29:57.919: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 26 12:29:57.948: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 26 12:29:59.995: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 26 12:30:00.001: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638597, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 12:30:02.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638597, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 12:30:04.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638597, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 12:30:06.057: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638597, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 12:30:08.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715638597, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 12:30:10.266: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 26 12:30:10.678: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-56s5m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-56s5m/deployments/test-rolling-update-deployment,UID:9072748a-4037-11ea-a994-fa163e34d433,ResourceVersion:19523049,Generation:1,CreationTimestamp:2020-01-26 12:29:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-26 12:29:58 +0000 UTC 2020-01-26 12:29:58 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-26 12:30:08 +0000 UTC 2020-01-26 12:29:57 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 26 12:30:10.695: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-56s5m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-56s5m/replicasets/test-rolling-update-deployment-75db98fb4c,UID:907db1fd-4037-11ea-a994-fa163e34d433,ResourceVersion:19523039,Generation:1,CreationTimestamp:2020-01-26 12:29:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9072748a-4037-11ea-a994-fa163e34d433 0xc0027c9cc7 0xc0027c9cc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 26 12:30:10.695: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 26 12:30:10.695: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-56s5m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-56s5m/replicasets/test-rolling-update-controller,UID:89d86679-4037-11ea-a994-fa163e34d433,ResourceVersion:19523047,Generation:2,CreationTimestamp:2020-01-26 12:29:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9072748a-4037-11ea-a994-fa163e34d433 0xc0027c9bef 0xc0027c9c00}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 26 12:30:10.701: INFO: Pod "test-rolling-update-deployment-75db98fb4c-752nv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-752nv,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-56s5m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-56s5m/pods/test-rolling-update-deployment-75db98fb4c-752nv,UID:90973bcf-4037-11ea-a994-fa163e34d433,ResourceVersion:19523038,Generation:0,CreationTimestamp:2020-01-26 12:29:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 907db1fd-4037-11ea-a994-fa163e34d433 0xc00268b587 0xc00268b588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gkg8b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gkg8b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-gkg8b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00268b620} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00268b640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:29:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:30:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:30:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:29:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-26 12:29:58 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-26 12:30:06 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://c6abd32b9a6c8e9938a7609804feb004d6babdbf647ce5649b680b1414946a54}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:30:10.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-56s5m" for this suite.
Jan 26 12:30:18.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:30:18.929: INFO: namespace: e2e-tests-deployment-56s5m, resource: bindings, ignored listing per whitelist
Jan 26 12:30:19.007: INFO: namespace e2e-tests-deployment-56s5m deletion completed in 8.295417351s

• [SLOW TEST:32.434 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
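Editor's note: the rolling-update spec drives a Deployment whose strategy is RollingUpdate with the 25% maxUnavailable / 25% maxSurge values that appear in the object dump above (shown there as "25%!,(MISSING)" because of printf %-escaping). A sketch of how that strategy is expressed in the apps/v1 API; the image and labels mirror the log, the rest is illustrative.

    package main

    import (
    	"fmt"

    	appsv1 "k8s.io/api/apps/v1"
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	replicas := int32(1)
    	maxUnavailable := intstr.FromString("25%")
    	maxSurge := intstr.FromString("25%")
    	labels := map[string]string{"name": "sample-pod"}

    	deployment := &appsv1.Deployment{
    		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
    		Spec: appsv1.DeploymentSpec{
    			Replicas: &replicas,
    			Selector: &metav1.LabelSelector{MatchLabels: labels},
    			Strategy: appsv1.DeploymentStrategy{
    				Type: appsv1.RollingUpdateDeploymentStrategyType,
    				RollingUpdate: &appsv1.RollingUpdateDeployment{
    					MaxUnavailable: &maxUnavailable,
    					MaxSurge:       &maxSurge,
    				},
    			},
    			Template: corev1.PodTemplateSpec{
    				ObjectMeta: metav1.ObjectMeta{Labels: labels},
    				Spec: corev1.PodSpec{
    					Containers: []corev1.Container{{
    						Name:  "redis",
    						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
    					}},
    				},
    			},
    		},
    	}
    	fmt.Printf("strategy=%s maxUnavailable=%s maxSurge=%s\n",
    		deployment.Spec.Strategy.Type, maxUnavailable.String(), maxSurge.String())
    }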
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:30:19.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 26 12:30:20.439: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:30:37.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-knxd4" for this suite.
Jan 26 12:30:44.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:30:44.146: INFO: namespace: e2e-tests-init-container-knxd4, resource: bindings, ignored listing per whitelist
Jan 26 12:30:44.161: INFO: namespace e2e-tests-init-container-knxd4 deletion completed in 6.18624704s

• [SLOW TEST:25.153 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
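Editor's note: the init-container spec just above creates a RestartNever pod whose spec.initContainers must all run to completion before the app container starts. A minimal sketch of that pod shape; images, commands and names are assumptions.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			InitContainers: []corev1.Container{
    				{Name: "init1", Image: "busybox", Command: []string{"true"}},
    				{Name: "init2", Image: "busybox", Command: []string{"true"}},
    			},
    			Containers: []corev1.Container{{
    				Name:    "run1",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "echo done"},
    			}},
    		},
    	}
    	// Both init containers must finish (State.Terminated, exit code 0)
    	// before "run1" is started.
    	fmt.Printf("init containers: %d\n", len(pod.Spec.InitContainers))
    }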
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:30:44.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan 26 12:30:44.377: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-sf74r" to be "success or failure"
Jan 26 12:30:44.400: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.907844ms
Jan 26 12:30:46.409: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032052638s
Jan 26 12:30:48.417: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040362237s
Jan 26 12:30:50.467: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090413461s
Jan 26 12:30:52.534: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157256944s
Jan 26 12:30:54.569: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.19185387s
Jan 26 12:30:56.618: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.241384033s
STEP: Saw pod success
Jan 26 12:30:56.619: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 26 12:30:56.698: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 26 12:30:56.876: INFO: Waiting for pod pod-host-path-test to disappear
Jan 26 12:30:56.886: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:30:56.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-sf74r" for this suite.
Jan 26 12:31:03.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:31:03.124: INFO: namespace: e2e-tests-hostpath-sf74r, resource: bindings, ignored listing per whitelist
Jan 26 12:31:03.202: INFO: namespace e2e-tests-hostpath-sf74r deletion completed in 6.221881075s

• [SLOW TEST:19.041 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
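Editor's note: the HostPath spec above mounts a hostPath volume into a test container and verifies the mode of the mounted path. A sketch of a pod with a hostPath volume; the host path, mount point and command are assumptions, while the container name mirrors the log.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-example"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Volumes: []corev1.Volume{{
    				Name: "test-volume",
    				VolumeSource: corev1.VolumeSource{
    					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp"},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:    "test-container-1",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "ls -ld /test-volume"}, // prints the mode bits of the mounted path
    				VolumeMounts: []corev1.VolumeMount{{
    					Name:      "test-volume",
    					MountPath: "/test-volume",
    				}},
    			}},
    		},
    	}
    	fmt.Printf("hostPath=%s\n", pod.Spec.Volumes[0].HostPath.Path)
    }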
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:31:03.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-x9g6f
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-x9g6f
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-x9g6f
Jan 26 12:31:03.565: INFO: Found 0 stateful pods, waiting for 1
Jan 26 12:31:13.581: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 26 12:31:13.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 12:31:14.133: INFO: stderr: "I0126 12:31:13.822299    2600 log.go:172] (0xc000138840) (0xc0007b2640) Create stream\nI0126 12:31:13.822434    2600 log.go:172] (0xc000138840) (0xc0007b2640) Stream added, broadcasting: 1\nI0126 12:31:13.827128    2600 log.go:172] (0xc000138840) Reply frame received for 1\nI0126 12:31:13.827150    2600 log.go:172] (0xc000138840) (0xc0007b26e0) Create stream\nI0126 12:31:13.827158    2600 log.go:172] (0xc000138840) (0xc0007b26e0) Stream added, broadcasting: 3\nI0126 12:31:13.828210    2600 log.go:172] (0xc000138840) Reply frame received for 3\nI0126 12:31:13.828232    2600 log.go:172] (0xc000138840) (0xc000660be0) Create stream\nI0126 12:31:13.828245    2600 log.go:172] (0xc000138840) (0xc000660be0) Stream added, broadcasting: 5\nI0126 12:31:13.829098    2600 log.go:172] (0xc000138840) Reply frame received for 5\nI0126 12:31:14.034322    2600 log.go:172] (0xc000138840) Data frame received for 3\nI0126 12:31:14.034345    2600 log.go:172] (0xc0007b26e0) (3) Data frame handling\nI0126 12:31:14.034353    2600 log.go:172] (0xc0007b26e0) (3) Data frame sent\nI0126 12:31:14.127473    2600 log.go:172] (0xc000138840) (0xc0007b26e0) Stream removed, broadcasting: 3\nI0126 12:31:14.127588    2600 log.go:172] (0xc000138840) Data frame received for 1\nI0126 12:31:14.127612    2600 log.go:172] (0xc0007b2640) (1) Data frame handling\nI0126 12:31:14.127627    2600 log.go:172] (0xc0007b2640) (1) Data frame sent\nI0126 12:31:14.127640    2600 log.go:172] (0xc000138840) (0xc000660be0) Stream removed, broadcasting: 5\nI0126 12:31:14.127675    2600 log.go:172] (0xc000138840) (0xc0007b2640) Stream removed, broadcasting: 1\nI0126 12:31:14.127698    2600 log.go:172] (0xc000138840) Go away received\nI0126 12:31:14.128039    2600 log.go:172] (0xc000138840) (0xc0007b2640) Stream removed, broadcasting: 1\nI0126 12:31:14.128059    2600 log.go:172] (0xc000138840) (0xc0007b26e0) Stream removed, broadcasting: 3\nI0126 12:31:14.128071    2600 log.go:172] (0xc000138840) (0xc000660be0) Stream removed, broadcasting: 5\n"
Jan 26 12:31:14.134: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 12:31:14.134: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 12:31:14.144: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 26 12:31:24.162: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 12:31:24.162: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 12:31:24.188: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 26 12:31:24.188: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:03 +0000 UTC  }]
Jan 26 12:31:24.188: INFO: 
Jan 26 12:31:24.188: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 26 12:31:25.351: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993151954s
Jan 26 12:31:26.374: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.830051182s
Jan 26 12:31:27.393: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.806856233s
Jan 26 12:31:28.408: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.78750185s
Jan 26 12:31:29.418: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.773022129s
Jan 26 12:31:30.666: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.763348934s
Jan 26 12:31:32.095: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.515289902s
Jan 26 12:31:33.138: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.085669533s
Jan 26 12:31:34.411: INFO: Verifying statefulset ss doesn't scale past 3 for another 42.310216ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-x9g6f
Jan 26 12:31:35.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:31:36.136: INFO: stderr: "I0126 12:31:35.725786    2622 log.go:172] (0xc00013a580) (0xc0008c25a0) Create stream\nI0126 12:31:35.726009    2622 log.go:172] (0xc00013a580) (0xc0008c25a0) Stream added, broadcasting: 1\nI0126 12:31:35.735998    2622 log.go:172] (0xc00013a580) Reply frame received for 1\nI0126 12:31:35.736099    2622 log.go:172] (0xc00013a580) (0xc000632c80) Create stream\nI0126 12:31:35.736151    2622 log.go:172] (0xc00013a580) (0xc000632c80) Stream added, broadcasting: 3\nI0126 12:31:35.737633    2622 log.go:172] (0xc00013a580) Reply frame received for 3\nI0126 12:31:35.737672    2622 log.go:172] (0xc00013a580) (0xc0006e8000) Create stream\nI0126 12:31:35.737683    2622 log.go:172] (0xc00013a580) (0xc0006e8000) Stream added, broadcasting: 5\nI0126 12:31:35.738851    2622 log.go:172] (0xc00013a580) Reply frame received for 5\nI0126 12:31:35.917280    2622 log.go:172] (0xc00013a580) Data frame received for 3\nI0126 12:31:35.917315    2622 log.go:172] (0xc000632c80) (3) Data frame handling\nI0126 12:31:35.917326    2622 log.go:172] (0xc000632c80) (3) Data frame sent\nI0126 12:31:36.122003    2622 log.go:172] (0xc00013a580) (0xc000632c80) Stream removed, broadcasting: 3\nI0126 12:31:36.122162    2622 log.go:172] (0xc00013a580) Data frame received for 1\nI0126 12:31:36.122175    2622 log.go:172] (0xc0008c25a0) (1) Data frame handling\nI0126 12:31:36.122206    2622 log.go:172] (0xc0008c25a0) (1) Data frame sent\nI0126 12:31:36.122431    2622 log.go:172] (0xc00013a580) (0xc0008c25a0) Stream removed, broadcasting: 1\nI0126 12:31:36.122733    2622 log.go:172] (0xc00013a580) (0xc0006e8000) Stream removed, broadcasting: 5\nI0126 12:31:36.122993    2622 log.go:172] (0xc00013a580) Go away received\nI0126 12:31:36.123181    2622 log.go:172] (0xc00013a580) (0xc0008c25a0) Stream removed, broadcasting: 1\nI0126 12:31:36.123221    2622 log.go:172] (0xc00013a580) (0xc000632c80) Stream removed, broadcasting: 3\nI0126 12:31:36.123238    2622 log.go:172] (0xc00013a580) (0xc0006e8000) Stream removed, broadcasting: 5\n"
Jan 26 12:31:36.136: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 12:31:36.136: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 12:31:36.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:31:36.887: INFO: stderr: "I0126 12:31:36.394895    2643 log.go:172] (0xc00013a630) (0xc0007a6820) Create stream\nI0126 12:31:36.394991    2643 log.go:172] (0xc00013a630) (0xc0007a6820) Stream added, broadcasting: 1\nI0126 12:31:36.403754    2643 log.go:172] (0xc00013a630) Reply frame received for 1\nI0126 12:31:36.403801    2643 log.go:172] (0xc00013a630) (0xc0007ca3c0) Create stream\nI0126 12:31:36.403816    2643 log.go:172] (0xc00013a630) (0xc0007ca3c0) Stream added, broadcasting: 3\nI0126 12:31:36.405410    2643 log.go:172] (0xc00013a630) Reply frame received for 3\nI0126 12:31:36.405441    2643 log.go:172] (0xc00013a630) (0xc0007a6000) Create stream\nI0126 12:31:36.405457    2643 log.go:172] (0xc00013a630) (0xc0007a6000) Stream added, broadcasting: 5\nI0126 12:31:36.407617    2643 log.go:172] (0xc00013a630) Reply frame received for 5\nI0126 12:31:36.652862    2643 log.go:172] (0xc00013a630) Data frame received for 5\nI0126 12:31:36.653198    2643 log.go:172] (0xc0007a6000) (5) Data frame handling\nI0126 12:31:36.653222    2643 log.go:172] (0xc0007a6000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0126 12:31:36.653245    2643 log.go:172] (0xc00013a630) Data frame received for 3\nI0126 12:31:36.653253    2643 log.go:172] (0xc0007ca3c0) (3) Data frame handling\nI0126 12:31:36.653267    2643 log.go:172] (0xc0007ca3c0) (3) Data frame sent\nI0126 12:31:36.880094    2643 log.go:172] (0xc00013a630) (0xc0007ca3c0) Stream removed, broadcasting: 3\nI0126 12:31:36.880188    2643 log.go:172] (0xc00013a630) Data frame received for 1\nI0126 12:31:36.880206    2643 log.go:172] (0xc0007a6820) (1) Data frame handling\nI0126 12:31:36.880217    2643 log.go:172] (0xc0007a6820) (1) Data frame sent\nI0126 12:31:36.880271    2643 log.go:172] (0xc00013a630) (0xc0007a6820) Stream removed, broadcasting: 1\nI0126 12:31:36.880435    2643 log.go:172] (0xc00013a630) (0xc0007a6000) Stream removed, broadcasting: 5\nI0126 12:31:36.880449    2643 log.go:172] (0xc00013a630) Go away received\nI0126 12:31:36.880641    2643 log.go:172] (0xc00013a630) (0xc0007a6820) Stream removed, broadcasting: 1\nI0126 12:31:36.880667    2643 log.go:172] (0xc00013a630) (0xc0007ca3c0) Stream removed, broadcasting: 3\nI0126 12:31:36.880681    2643 log.go:172] (0xc00013a630) (0xc0007a6000) Stream removed, broadcasting: 5\n"
Jan 26 12:31:36.887: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 12:31:36.887: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 12:31:36.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:31:37.465: INFO: stderr: "I0126 12:31:37.184145    2664 log.go:172] (0xc0001aa160) (0xc00071c640) Create stream\nI0126 12:31:37.184369    2664 log.go:172] (0xc0001aa160) (0xc00071c640) Stream added, broadcasting: 1\nI0126 12:31:37.196150    2664 log.go:172] (0xc0001aa160) Reply frame received for 1\nI0126 12:31:37.196180    2664 log.go:172] (0xc0001aa160) (0xc00071c6e0) Create stream\nI0126 12:31:37.196189    2664 log.go:172] (0xc0001aa160) (0xc00071c6e0) Stream added, broadcasting: 3\nI0126 12:31:37.196941    2664 log.go:172] (0xc0001aa160) Reply frame received for 3\nI0126 12:31:37.196960    2664 log.go:172] (0xc0001aa160) (0xc0005fac80) Create stream\nI0126 12:31:37.196968    2664 log.go:172] (0xc0001aa160) (0xc0005fac80) Stream added, broadcasting: 5\nI0126 12:31:37.197657    2664 log.go:172] (0xc0001aa160) Reply frame received for 5\nI0126 12:31:37.307187    2664 log.go:172] (0xc0001aa160) Data frame received for 3\nI0126 12:31:37.307280    2664 log.go:172] (0xc00071c6e0) (3) Data frame handling\nI0126 12:31:37.307298    2664 log.go:172] (0xc00071c6e0) (3) Data frame sent\nI0126 12:31:37.307425    2664 log.go:172] (0xc0001aa160) Data frame received for 5\nI0126 12:31:37.307503    2664 log.go:172] (0xc0005fac80) (5) Data frame handling\nI0126 12:31:37.307528    2664 log.go:172] (0xc0005fac80) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0126 12:31:37.455219    2664 log.go:172] (0xc0001aa160) Data frame received for 1\nI0126 12:31:37.455305    2664 log.go:172] (0xc00071c640) (1) Data frame handling\nI0126 12:31:37.455328    2664 log.go:172] (0xc00071c640) (1) Data frame sent\nI0126 12:31:37.455661    2664 log.go:172] (0xc0001aa160) (0xc00071c640) Stream removed, broadcasting: 1\nI0126 12:31:37.456051    2664 log.go:172] (0xc0001aa160) (0xc00071c6e0) Stream removed, broadcasting: 3\nI0126 12:31:37.456505    2664 log.go:172] (0xc0001aa160) (0xc0005fac80) Stream removed, broadcasting: 5\nI0126 12:31:37.456542    2664 log.go:172] (0xc0001aa160) (0xc00071c640) Stream removed, broadcasting: 1\nI0126 12:31:37.456678    2664 log.go:172] (0xc0001aa160) (0xc00071c6e0) Stream removed, broadcasting: 3\nI0126 12:31:37.456731    2664 log.go:172] (0xc0001aa160) (0xc0005fac80) Stream removed, broadcasting: 5\n"
Jan 26 12:31:37.466: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 12:31:37.466: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 12:31:37.522: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 12:31:37.522: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 12:31:37.522: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Jan 26 12:31:47.546: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 12:31:47.546: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 12:31:47.546: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 26 12:31:47.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 12:31:48.179: INFO: stderr: "I0126 12:31:47.790223    2685 log.go:172] (0xc0005302c0) (0xc000825720) Create stream\nI0126 12:31:47.790639    2685 log.go:172] (0xc0005302c0) (0xc000825720) Stream added, broadcasting: 1\nI0126 12:31:47.809471    2685 log.go:172] (0xc0005302c0) Reply frame received for 1\nI0126 12:31:47.809594    2685 log.go:172] (0xc0005302c0) (0xc000376960) Create stream\nI0126 12:31:47.809618    2685 log.go:172] (0xc0005302c0) (0xc000376960) Stream added, broadcasting: 3\nI0126 12:31:47.815741    2685 log.go:172] (0xc0005302c0) Reply frame received for 3\nI0126 12:31:47.815796    2685 log.go:172] (0xc0005302c0) (0xc00008e1e0) Create stream\nI0126 12:31:47.815815    2685 log.go:172] (0xc0005302c0) (0xc00008e1e0) Stream added, broadcasting: 5\nI0126 12:31:47.818090    2685 log.go:172] (0xc0005302c0) Reply frame received for 5\nI0126 12:31:47.990671    2685 log.go:172] (0xc0005302c0) Data frame received for 3\nI0126 12:31:47.990788    2685 log.go:172] (0xc000376960) (3) Data frame handling\nI0126 12:31:47.990874    2685 log.go:172] (0xc000376960) (3) Data frame sent\nI0126 12:31:48.171142    2685 log.go:172] (0xc0005302c0) (0xc000376960) Stream removed, broadcasting: 3\nI0126 12:31:48.171337    2685 log.go:172] (0xc0005302c0) Data frame received for 1\nI0126 12:31:48.171355    2685 log.go:172] (0xc000825720) (1) Data frame handling\nI0126 12:31:48.171380    2685 log.go:172] (0xc000825720) (1) Data frame sent\nI0126 12:31:48.171389    2685 log.go:172] (0xc0005302c0) (0xc000825720) Stream removed, broadcasting: 1\nI0126 12:31:48.171692    2685 log.go:172] (0xc0005302c0) (0xc00008e1e0) Stream removed, broadcasting: 5\nI0126 12:31:48.171816    2685 log.go:172] (0xc0005302c0) (0xc000825720) Stream removed, broadcasting: 1\nI0126 12:31:48.171827    2685 log.go:172] (0xc0005302c0) (0xc000376960) Stream removed, broadcasting: 3\nI0126 12:31:48.171837    2685 log.go:172] (0xc0005302c0) (0xc00008e1e0) Stream removed, broadcasting: 5\nI0126 12:31:48.171858    2685 log.go:172] (0xc0005302c0) Go away received\n"
Jan 26 12:31:48.179: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 12:31:48.179: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 12:31:48.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 12:31:48.857: INFO: stderr: "I0126 12:31:48.341608    2707 log.go:172] (0xc00013a160) (0xc00063cdc0) Create stream\nI0126 12:31:48.341819    2707 log.go:172] (0xc00013a160) (0xc00063cdc0) Stream added, broadcasting: 1\nI0126 12:31:48.347547    2707 log.go:172] (0xc00013a160) Reply frame received for 1\nI0126 12:31:48.347577    2707 log.go:172] (0xc00013a160) (0xc0000ca320) Create stream\nI0126 12:31:48.347584    2707 log.go:172] (0xc00013a160) (0xc0000ca320) Stream added, broadcasting: 3\nI0126 12:31:48.348743    2707 log.go:172] (0xc00013a160) Reply frame received for 3\nI0126 12:31:48.348767    2707 log.go:172] (0xc00013a160) (0xc000644dc0) Create stream\nI0126 12:31:48.348779    2707 log.go:172] (0xc00013a160) (0xc000644dc0) Stream added, broadcasting: 5\nI0126 12:31:48.349904    2707 log.go:172] (0xc00013a160) Reply frame received for 5\nI0126 12:31:48.571697    2707 log.go:172] (0xc00013a160) Data frame received for 3\nI0126 12:31:48.571923    2707 log.go:172] (0xc0000ca320) (3) Data frame handling\nI0126 12:31:48.571982    2707 log.go:172] (0xc0000ca320) (3) Data frame sent\nI0126 12:31:48.847549    2707 log.go:172] (0xc00013a160) Data frame received for 1\nI0126 12:31:48.847668    2707 log.go:172] (0xc00063cdc0) (1) Data frame handling\nI0126 12:31:48.847717    2707 log.go:172] (0xc00063cdc0) (1) Data frame sent\nI0126 12:31:48.847737    2707 log.go:172] (0xc00013a160) (0xc00063cdc0) Stream removed, broadcasting: 1\nI0126 12:31:48.849384    2707 log.go:172] (0xc00013a160) (0xc0000ca320) Stream removed, broadcasting: 3\nI0126 12:31:48.849462    2707 log.go:172] (0xc00013a160) (0xc000644dc0) Stream removed, broadcasting: 5\nI0126 12:31:48.849485    2707 log.go:172] (0xc00013a160) Go away received\nI0126 12:31:48.849878    2707 log.go:172] (0xc00013a160) (0xc00063cdc0) Stream removed, broadcasting: 1\nI0126 12:31:48.849901    2707 log.go:172] (0xc00013a160) (0xc0000ca320) Stream removed, broadcasting: 3\nI0126 12:31:48.849914    2707 log.go:172] (0xc00013a160) (0xc000644dc0) Stream removed, broadcasting: 5\n"
Jan 26 12:31:48.857: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 12:31:48.857: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 12:31:48.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 12:31:49.518: INFO: stderr: "I0126 12:31:49.142575    2729 log.go:172] (0xc000646000) (0xc000802dc0) Create stream\nI0126 12:31:49.142762    2729 log.go:172] (0xc000646000) (0xc000802dc0) Stream added, broadcasting: 1\nI0126 12:31:49.148251    2729 log.go:172] (0xc000646000) Reply frame received for 1\nI0126 12:31:49.148300    2729 log.go:172] (0xc000646000) (0xc000800000) Create stream\nI0126 12:31:49.148312    2729 log.go:172] (0xc000646000) (0xc000800000) Stream added, broadcasting: 3\nI0126 12:31:49.149636    2729 log.go:172] (0xc000646000) Reply frame received for 3\nI0126 12:31:49.149658    2729 log.go:172] (0xc000646000) (0xc0007d0000) Create stream\nI0126 12:31:49.149665    2729 log.go:172] (0xc000646000) (0xc0007d0000) Stream added, broadcasting: 5\nI0126 12:31:49.152722    2729 log.go:172] (0xc000646000) Reply frame received for 5\nI0126 12:31:49.404254    2729 log.go:172] (0xc000646000) Data frame received for 3\nI0126 12:31:49.404279    2729 log.go:172] (0xc000800000) (3) Data frame handling\nI0126 12:31:49.404289    2729 log.go:172] (0xc000800000) (3) Data frame sent\nI0126 12:31:49.510575    2729 log.go:172] (0xc000646000) (0xc000800000) Stream removed, broadcasting: 3\nI0126 12:31:49.510687    2729 log.go:172] (0xc000646000) Data frame received for 1\nI0126 12:31:49.510722    2729 log.go:172] (0xc000802dc0) (1) Data frame handling\nI0126 12:31:49.510739    2729 log.go:172] (0xc000646000) (0xc0007d0000) Stream removed, broadcasting: 5\nI0126 12:31:49.510776    2729 log.go:172] (0xc000802dc0) (1) Data frame sent\nI0126 12:31:49.510782    2729 log.go:172] (0xc000646000) (0xc000802dc0) Stream removed, broadcasting: 1\nI0126 12:31:49.510977    2729 log.go:172] (0xc000646000) Go away received\nI0126 12:31:49.511080    2729 log.go:172] (0xc000646000) (0xc000802dc0) Stream removed, broadcasting: 1\nI0126 12:31:49.511103    2729 log.go:172] (0xc000646000) (0xc000800000) Stream removed, broadcasting: 3\nI0126 12:31:49.511117    2729 log.go:172] (0xc000646000) (0xc0007d0000) Stream removed, broadcasting: 5\n"
Jan 26 12:31:49.519: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 12:31:49.519: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 12:31:49.519: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 12:31:49.535: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 26 12:31:59.558: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 12:31:59.558: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 12:31:59.558: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 12:31:59.604: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 26 12:31:59.604: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:03 +0000 UTC  }]
Jan 26 12:31:59.604: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:31:59.604: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:31:59.604: INFO: 
Jan 26 12:31:59.604: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 12:32:00.632: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 26 12:32:00.632: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:03 +0000 UTC  }]
Jan 26 12:32:00.632: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:00.632: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:00.632: INFO: 
Jan 26 12:32:00.632: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 12:32:02.225: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 26 12:32:02.225: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:03 +0000 UTC  }]
Jan 26 12:32:02.225: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:02.225: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:02.225: INFO: 
Jan 26 12:32:02.225: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 12:32:03.247: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 26 12:32:03.247: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:03 +0000 UTC  }]
Jan 26 12:32:03.247: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:03.247: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:03.248: INFO: 
Jan 26 12:32:03.248: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 12:32:04.275: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 26 12:32:04.275: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:03 +0000 UTC  }]
Jan 26 12:32:04.275: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:04.275: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:04.275: INFO: 
Jan 26 12:32:04.275: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 12:32:05.290: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 26 12:32:05.290: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:03 +0000 UTC  }]
Jan 26 12:32:05.290: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:05.290: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:05.290: INFO: 
Jan 26 12:32:05.290: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 12:32:06.320: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 26 12:32:06.320: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:03 +0000 UTC  }]
Jan 26 12:32:06.321: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:06.321: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:06.321: INFO: 
Jan 26 12:32:06.321: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 12:32:07.336: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 26 12:32:07.336: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:03 +0000 UTC  }]
Jan 26 12:32:07.336: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:07.336: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:07.336: INFO: 
Jan 26 12:32:07.336: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 12:32:08.343: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 26 12:32:08.344: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:03 +0000 UTC  }]
Jan 26 12:32:08.344: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:08.344: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:08.344: INFO: 
Jan 26 12:32:08.344: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 12:32:09.354: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 26 12:32:09.354: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:03 +0000 UTC  }]
Jan 26 12:32:09.354: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:09.354: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:31:24 +0000 UTC  }]
Jan 26 12:32:09.354: INFO: 
Jan 26 12:32:09.354: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods will run in namespace e2e-tests-statefulset-x9g6f
Jan 26 12:32:10.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:32:10.830: INFO: rc: 1
Jan 26 12:32:10.830: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001dc9b90 exit status 1   true [0xc0018cc078 0xc0018cc090 0xc0018cc0a8] [0xc0018cc078 0xc0018cc090 0xc0018cc0a8] [0xc0018cc088 0xc0018cc0a0] [0x935700 0x935700] 0xc002090f60 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan 26 12:32:20.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:32:20.986: INFO: rc: 1
Jan 26 12:32:20.987: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001ed41b0 exit status 1   true [0xc000b52000 0xc000b52018 0xc000b52030] [0xc000b52000 0xc000b52018 0xc000b52030] [0xc000b52010 0xc000b52028] [0x935700 0x935700] 0xc001c463c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:32:30.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:32:31.192: INFO: rc: 1
Jan 26 12:32:31.193: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001ed4330 exit status 1   true [0xc000b52038 0xc000b52050 0xc000b52068] [0xc000b52038 0xc000b52050 0xc000b52068] [0xc000b52048 0xc000b52060] [0x935700 0x935700] 0xc001c466c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:32:41.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:32:41.346: INFO: rc: 1
Jan 26 12:32:41.346: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001dc9d10 exit status 1   true [0xc0018cc0b0 0xc0018cc0c8 0xc0018cc0e0] [0xc0018cc0b0 0xc0018cc0c8 0xc0018cc0e0] [0xc0018cc0c0 0xc0018cc0d8] [0x935700 0x935700] 0xc002091200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:32:51.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:32:51.507: INFO: rc: 1
Jan 26 12:32:51.507: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001ed4510 exit status 1   true [0xc000b52070 0xc000b52088 0xc000b520a0] [0xc000b52070 0xc000b52088 0xc000b520a0] [0xc000b52080 0xc000b52098] [0x935700 0x935700] 0xc001c46a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:33:01.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:33:01.689: INFO: rc: 1
Jan 26 12:33:01.689: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001ed4660 exit status 1   true [0xc000b520a8 0xc000b520c0 0xc000b520d8] [0xc000b520a8 0xc000b520c0 0xc000b520d8] [0xc000b520b8 0xc000b520d0] [0x935700 0x935700] 0xc001c46f60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:33:11.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:33:11.913: INFO: rc: 1
Jan 26 12:33:11.913: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00218a5d0 exit status 1   true [0xc000b48c38 0xc000b48d50 0xc000b48db8] [0xc000b48c38 0xc000b48d50 0xc000b48db8] [0xc000b48d18 0xc000b48da8] [0x935700 0x935700] 0xc000a13200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:33:21.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:33:22.106: INFO: rc: 1
Jan 26 12:33:22.106: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00218a6f0 exit status 1   true [0xc000b48dc0 0xc000b48e20 0xc000b48eb8] [0xc000b48dc0 0xc000b48e20 0xc000b48eb8] [0xc000b48df0 0xc000b48ea8] [0x935700 0x935700] 0xc000a13a40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:33:32.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:33:32.240: INFO: rc: 1
Jan 26 12:33:32.240: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c58510 exit status 1   true [0xc001b480a0 0xc001b480b8 0xc001b480d0] [0xc001b480a0 0xc001b480b8 0xc001b480d0] [0xc001b480b0 0xc001b480c8] [0x935700 0x935700] 0xc001838fc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:33:42.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:33:42.430: INFO: rc: 1
Jan 26 12:33:42.430: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00218aa20 exit status 1   true [0xc000b48ec0 0xc000b48f00 0xc000b48f18] [0xc000b48ec0 0xc000b48f00 0xc000b48f18] [0xc000b48ef8 0xc000b48f10] [0x935700 0x935700] 0xc000a13f20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:33:52.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:33:52.695: INFO: rc: 1
Jan 26 12:33:52.696: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001ed4780 exit status 1   true [0xc000b520e0 0xc000b520f8 0xc000b52110] [0xc000b520e0 0xc000b520f8 0xc000b52110] [0xc000b520f0 0xc000b52108] [0x935700 0x935700] 0xc001c47260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:34:02.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:34:02.815: INFO: rc: 1
Jan 26 12:34:02.815: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00157e120 exit status 1   true [0xc000b48030 0xc000b48268 0xc000b48330] [0xc000b48030 0xc000b48268 0xc000b48330] [0xc000b48198 0xc000b482f8] [0x935700 0x935700] 0xc000a124e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:34:12.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:34:12.969: INFO: rc: 1
Jan 26 12:34:12.970: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00157e660 exit status 1   true [0xc000b48338 0xc000b484e8 0xc000b486c0] [0xc000b48338 0xc000b484e8 0xc000b486c0] [0xc000b48490 0xc000b48668] [0x935700 0x935700] 0xc000a130e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:34:22.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:34:23.131: INFO: rc: 1
Jan 26 12:34:23.132: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00192e1e0 exit status 1   true [0xc000b52000 0xc000b52018 0xc000b52030] [0xc000b52000 0xc000b52018 0xc000b52030] [0xc000b52010 0xc000b52028] [0x935700 0x935700] 0xc001c40480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:34:33.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:34:33.279: INFO: rc: 1
Jan 26 12:34:33.279: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00157ebd0 exit status 1   true [0xc000b486f8 0xc000b487e0 0xc000b48860] [0xc000b486f8 0xc000b487e0 0xc000b48860] [0xc000b487a8 0xc000b48838] [0x935700 0x935700] 0xc000a13800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:34:43.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:34:43.429: INFO: rc: 1
Jan 26 12:34:43.430: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00157ed50 exit status 1   true [0xc000b48878 0xc000b488f0 0xc000b48988] [0xc000b48878 0xc000b488f0 0xc000b48988] [0xc000b488b8 0xc000b48960] [0x935700 0x935700] 0xc000a13e60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:34:53.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:34:53.603: INFO: rc: 1
Jan 26 12:34:53.603: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001ed41e0 exit status 1   true [0xc0018cc000 0xc0018cc018 0xc0018cc030] [0xc0018cc000 0xc0018cc018 0xc0018cc030] [0xc0018cc010 0xc0018cc028] [0x935700 0x935700] 0xc001c463c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:35:03.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:35:03.722: INFO: rc: 1
Jan 26 12:35:03.722: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00218a1b0 exit status 1   true [0xc001b48000 0xc001b48018 0xc001b48030] [0xc001b48000 0xc001b48018 0xc001b48030] [0xc001b48010 0xc001b48028] [0x935700 0x935700] 0xc0020908a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:35:13.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:35:13.874: INFO: rc: 1
Jan 26 12:35:13.874: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00218a2d0 exit status 1   true [0xc001b48038 0xc001b48050 0xc001b48068] [0xc001b48038 0xc001b48050 0xc001b48068] [0xc001b48048 0xc001b48060] [0x935700 0x935700] 0xc002090b40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:35:23.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:35:24.046: INFO: rc: 1
Jan 26 12:35:24.046: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00157eea0 exit status 1   true [0xc000b48998 0xc000b489b8 0xc000b48a10] [0xc000b48998 0xc000b489b8 0xc000b48a10] [0xc000b489b0 0xc000b489f8] [0x935700 0x935700] 0xc001838240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:35:34.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:35:34.142: INFO: rc: 1
Jan 26 12:35:34.142: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001ed4390 exit status 1   true [0xc0018cc038 0xc0018cc050 0xc0018cc068] [0xc0018cc038 0xc0018cc050 0xc0018cc068] [0xc0018cc048 0xc0018cc060] [0x935700 0x935700] 0xc001c466c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:35:44.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:35:44.265: INFO: rc: 1
Jan 26 12:35:44.265: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00218a450 exit status 1   true [0xc001b48070 0xc001b48088 0xc001b480a0] [0xc001b48070 0xc001b48088 0xc001b480a0] [0xc001b48080 0xc001b48098] [0x935700 0x935700] 0xc002090f60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:35:54.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:35:54.408: INFO: rc: 1
Jan 26 12:35:54.408: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00157eff0 exit status 1   true [0xc000b48a38 0xc000b48ba0 0xc000b48c88] [0xc000b48a38 0xc000b48ba0 0xc000b48c88] [0xc000b48b28 0xc000b48c38] [0x935700 0x935700] 0xc001838660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:36:04.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:36:04.629: INFO: rc: 1
Jan 26 12:36:04.630: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00192e300 exit status 1   true [0xc000b52040 0xc000b52058 0xc000b52070] [0xc000b52040 0xc000b52058 0xc000b52070] [0xc000b52050 0xc000b52068] [0x935700 0x935700] 0xc0020a19e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:36:14.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:36:14.832: INFO: rc: 1
Jan 26 12:36:14.833: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00218a180 exit status 1   true [0xc000176000 0xc000b52010 0xc000b52028] [0xc000176000 0xc000b52010 0xc000b52028] [0xc000b52008 0xc000b52020] [0x935700 0x935700] 0xc001c40480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:36:24.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:36:25.003: INFO: rc: 1
Jan 26 12:36:25.004: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00192e330 exit status 1   true [0xc001b48000 0xc001b48018 0xc001b48030] [0xc001b48000 0xc001b48018 0xc001b48030] [0xc001b48010 0xc001b48028] [0x935700 0x935700] 0xc0020a1c20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:36:35.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:36:35.115: INFO: rc: 1
Jan 26 12:36:35.115: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00218a330 exit status 1   true [0xc000b52030 0xc000b52088 0xc000b520a0] [0xc000b52030 0xc000b52088 0xc000b520a0] [0xc000b52080 0xc000b52098] [0x935700 0x935700] 0xc000a126c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:36:45.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:36:45.270: INFO: rc: 1
Jan 26 12:36:45.270: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00218a4b0 exit status 1   true [0xc000b520a8 0xc000b520c0 0xc000b520d8] [0xc000b520a8 0xc000b520c0 0xc000b520d8] [0xc000b520b8 0xc000b520d0] [0x935700 0x935700] 0xc000a13200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:36:55.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:36:55.432: INFO: rc: 1
Jan 26 12:36:55.432: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00192e450 exit status 1   true [0xc001b48038 0xc001b48050 0xc001b48068] [0xc001b48038 0xc001b48050 0xc001b48068] [0xc001b48048 0xc001b48060] [0x935700 0x935700] 0xc002090840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:37:05.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:37:05.577: INFO: rc: 1
Jan 26 12:37:05.577: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00157e150 exit status 1   true [0xc000b48030 0xc000b48268 0xc000b48330] [0xc000b48030 0xc000b48268 0xc000b48330] [0xc000b48198 0xc000b482f8] [0x935700 0x935700] 0xc001838300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 26 12:37:15.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9g6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 12:37:15.738: INFO: rc: 1
Jan 26 12:37:15.739: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Jan 26 12:37:15.739: INFO: Scaling statefulset ss to 0
Jan 26 12:37:15.766: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 26 12:37:15.773: INFO: Deleting all statefulset in ns e2e-tests-statefulset-x9g6f
Jan 26 12:37:15.780: INFO: Scaling statefulset ss to 0
Jan 26 12:37:15.809: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 12:37:15.814: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:37:15.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-x9g6f" for this suite.
Jan 26 12:37:23.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:37:24.034: INFO: namespace: e2e-tests-statefulset-x9g6f, resource: bindings, ignored listing per whitelist
Jan 26 12:37:24.133: INFO: namespace e2e-tests-statefulset-x9g6f deletion completed in 8.215016024s

• [SLOW TEST:380.930 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
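Note: the burst-scaling test above first breaks each pod's readiness probe by moving index.html out of the web root, then scales the StatefulSet to 0 and keeps retrying the restore command until the pods are gone (the repeated "pods \"ss-0\" not found" errors). The Go sketch below is not the framework's code; it only reproduces the final "scale to 0 and wait" step by shelling out to kubectl the same way the test does. Namespace and StatefulSet name are taken from the log; the polling interval and timeout are assumptions.

// scale_down_wait.go - minimal sketch of the scale-down/wait step, under the
// assumptions stated above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ns, ss := "e2e-tests-statefulset-x9g6f", "ss"

	// Scale the StatefulSet to 0 replicas, as the test does after breaking readiness.
	if out, err := kubectl("scale", "statefulset", ss, "--replicas=0", "-n", ns); err != nil {
		fmt.Println("scale failed:", out, err)
		return
	}

	// Poll until status.replicas reaches 0, mirroring
	// "Waiting for statefulset status.replicas updated to 0" in the log.
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := kubectl("get", "statefulset", ss, "-n", ns, "-o", "jsonpath={.status.replicas}")
		if err == nil && (out == "0" || out == "") {
			fmt.Println("statefulset scaled to 0")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for scale down")
}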
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:37:24.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 12:37:34.800: INFO: Waiting up to 5m0s for pod "client-envvars-a0c0a424-4038-11ea-b664-0242ac110005" in namespace "e2e-tests-pods-22wmf" to be "success or failure"
Jan 26 12:37:35.098: INFO: Pod "client-envvars-a0c0a424-4038-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 298.223274ms
Jan 26 12:37:37.151: INFO: Pod "client-envvars-a0c0a424-4038-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.351258107s
Jan 26 12:37:39.163: INFO: Pod "client-envvars-a0c0a424-4038-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.363049571s
Jan 26 12:37:41.363: INFO: Pod "client-envvars-a0c0a424-4038-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.562297371s
Jan 26 12:37:43.749: INFO: Pod "client-envvars-a0c0a424-4038-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.948779903s
Jan 26 12:37:45.767: INFO: Pod "client-envvars-a0c0a424-4038-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.966615207s
STEP: Saw pod success
Jan 26 12:37:45.767: INFO: Pod "client-envvars-a0c0a424-4038-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:37:45.772: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-a0c0a424-4038-11ea-b664-0242ac110005 container env3cont: 
STEP: delete the pod
Jan 26 12:37:46.368: INFO: Waiting for pod client-envvars-a0c0a424-4038-11ea-b664-0242ac110005 to disappear
Jan 26 12:37:46.576: INFO: Pod client-envvars-a0c0a424-4038-11ea-b664-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:37:46.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-22wmf" for this suite.
Jan 26 12:38:34.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:38:34.829: INFO: namespace: e2e-tests-pods-22wmf, resource: bindings, ignored listing per whitelist
Jan 26 12:38:34.943: INFO: namespace e2e-tests-pods-22wmf deletion completed in 48.34009369s

• [SLOW TEST:70.810 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
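Note: the test above creates a server pod and service, then runs a client pod whose container is expected to see the kubelet-injected service environment variables. The short Go sketch below shows what such a client container could run to list them; the <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT suffix filter is an assumption based on the documented naming convention, not the test's own code.

// print_service_env.go - minimal sketch of listing service environment
// variables from inside a container, under the assumptions stated above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	for _, kv := range os.Environ() {
		name := strings.SplitN(kv, "=", 2)[0]
		// Kubelet-injected service variables follow the *_SERVICE_HOST / *_SERVICE_PORT convention.
		if strings.Contains(name, "_SERVICE_HOST") || strings.Contains(name, "_SERVICE_PORT") {
			fmt.Println(kv)
		}
	}
}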
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:38:34.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:38:45.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-55hlb" for this suite.
Jan 26 12:39:27.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:39:27.680: INFO: namespace: e2e-tests-kubelet-test-55hlb, resource: bindings, ignored listing per whitelist
Jan 26 12:39:27.698: INFO: namespace e2e-tests-kubelet-test-55hlb deletion completed in 42.322683964s

• [SLOW TEST:52.754 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
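Note: the hostAliases test above schedules a busybox pod with spec.hostAliases and verifies the kubelet writes those entries into the container's /etc/hosts. The Go sketch below shows the in-container check conceptually; the IP/hostname pair is illustrative only, not the test's actual values.

// check_hosts_entry.go - minimal sketch of verifying a hostAliases entry in
// /etc/hosts, using an illustrative IP and hostname.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println("cannot read /etc/hosts:", err)
		os.Exit(1)
	}
	// A pod spec with
	//   hostAliases:
	//   - ip: "123.45.67.89"
	//     hostnames: ["foo.local"]
	// should produce an /etc/hosts line containing both the IP and the hostname.
	for _, line := range strings.Split(string(data), "\n") {
		if strings.Contains(line, "123.45.67.89") && strings.Contains(line, "foo.local") {
			fmt.Println("found hostAliases entry:", line)
			return
		}
	}
	fmt.Println("hostAliases entry not found")
	os.Exit(1)
}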
SSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:39:27.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-sszrq
I0126 12:39:28.044002       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-sszrq, replica count: 1
I0126 12:39:29.094934       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:39:30.095263       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:39:31.095599       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:39:32.096344       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:39:33.096784       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:39:34.097162       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:39:35.097502       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:39:36.097777       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:39:37.098174       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 26 12:39:37.231: INFO: Created: latency-svc-thbf6
Jan 26 12:39:37.286: INFO: Got endpoints: latency-svc-thbf6 [87.720269ms]
Jan 26 12:39:37.528: INFO: Created: latency-svc-jtndn
Jan 26 12:39:37.552: INFO: Got endpoints: latency-svc-jtndn [265.434492ms]
Jan 26 12:39:37.606: INFO: Created: latency-svc-r6lfl
Jan 26 12:39:37.799: INFO: Got endpoints: latency-svc-r6lfl [512.161698ms]
Jan 26 12:39:37.840: INFO: Created: latency-svc-xhw88
Jan 26 12:39:37.869: INFO: Got endpoints: latency-svc-xhw88 [581.750475ms]
Jan 26 12:39:38.022: INFO: Created: latency-svc-jl72h
Jan 26 12:39:38.035: INFO: Got endpoints: latency-svc-jl72h [747.000739ms]
Jan 26 12:39:38.333: INFO: Created: latency-svc-n2qct
Jan 26 12:39:38.368: INFO: Got endpoints: latency-svc-n2qct [1.080051029s]
Jan 26 12:39:38.607: INFO: Created: latency-svc-pg6jq
Jan 26 12:39:38.956: INFO: Got endpoints: latency-svc-pg6jq [1.667924448s]
Jan 26 12:39:38.998: INFO: Created: latency-svc-xptct
Jan 26 12:39:39.279: INFO: Got endpoints: latency-svc-xptct [1.991151309s]
Jan 26 12:39:39.327: INFO: Created: latency-svc-vrlt2
Jan 26 12:39:39.359: INFO: Got endpoints: latency-svc-vrlt2 [2.071543899s]
Jan 26 12:39:39.559: INFO: Created: latency-svc-lrszg
Jan 26 12:39:39.719: INFO: Got endpoints: latency-svc-lrszg [2.431486723s]
Jan 26 12:39:39.732: INFO: Created: latency-svc-64q56
Jan 26 12:39:39.747: INFO: Got endpoints: latency-svc-64q56 [2.459201505s]
Jan 26 12:39:40.100: INFO: Created: latency-svc-dcl6c
Jan 26 12:39:40.410: INFO: Got endpoints: latency-svc-dcl6c [3.121778698s]
Jan 26 12:39:40.424: INFO: Created: latency-svc-hr88r
Jan 26 12:39:40.480: INFO: Got endpoints: latency-svc-hr88r [3.192808413s]
Jan 26 12:39:40.883: INFO: Created: latency-svc-pcn2w
Jan 26 12:39:41.149: INFO: Got endpoints: latency-svc-pcn2w [3.861680804s]
Jan 26 12:39:41.189: INFO: Created: latency-svc-8cp6t
Jan 26 12:39:41.201: INFO: Got endpoints: latency-svc-8cp6t [3.91384382s]
Jan 26 12:39:41.336: INFO: Created: latency-svc-fjpwf
Jan 26 12:39:41.359: INFO: Got endpoints: latency-svc-fjpwf [4.071490205s]
Jan 26 12:39:41.384: INFO: Created: latency-svc-mcxhq
Jan 26 12:39:41.400: INFO: Got endpoints: latency-svc-mcxhq [3.84727982s]
Jan 26 12:39:41.574: INFO: Created: latency-svc-cdtck
Jan 26 12:39:41.583: INFO: Got endpoints: latency-svc-cdtck [3.783626702s]
Jan 26 12:39:41.641: INFO: Created: latency-svc-klnsl
Jan 26 12:39:41.783: INFO: Got endpoints: latency-svc-klnsl [3.914115532s]
Jan 26 12:39:41.827: INFO: Created: latency-svc-wpb5t
Jan 26 12:39:41.869: INFO: Got endpoints: latency-svc-wpb5t [3.833623515s]
Jan 26 12:39:42.016: INFO: Created: latency-svc-lsdnt
Jan 26 12:39:42.041: INFO: Got endpoints: latency-svc-lsdnt [3.672877027s]
Jan 26 12:39:42.077: INFO: Created: latency-svc-jw22s
Jan 26 12:39:42.186: INFO: Got endpoints: latency-svc-jw22s [3.229979002s]
Jan 26 12:39:42.241: INFO: Created: latency-svc-bsq8b
Jan 26 12:39:42.395: INFO: Got endpoints: latency-svc-bsq8b [3.116299997s]
Jan 26 12:39:42.420: INFO: Created: latency-svc-w47q8
Jan 26 12:39:42.441: INFO: Got endpoints: latency-svc-w47q8 [3.081840729s]
Jan 26 12:39:42.672: INFO: Created: latency-svc-lbhh6
Jan 26 12:39:42.724: INFO: Got endpoints: latency-svc-lbhh6 [3.004697301s]
Jan 26 12:39:42.950: INFO: Created: latency-svc-v9vwt
Jan 26 12:39:42.972: INFO: Got endpoints: latency-svc-v9vwt [3.225418405s]
Jan 26 12:39:43.206: INFO: Created: latency-svc-zspjb
Jan 26 12:39:43.225: INFO: Got endpoints: latency-svc-zspjb [2.815538218s]
Jan 26 12:39:43.378: INFO: Created: latency-svc-zwccp
Jan 26 12:39:43.391: INFO: Got endpoints: latency-svc-zwccp [2.910101308s]
Jan 26 12:39:43.454: INFO: Created: latency-svc-s2mzt
Jan 26 12:39:43.462: INFO: Got endpoints: latency-svc-s2mzt [2.312188203s]
Jan 26 12:39:43.648: INFO: Created: latency-svc-kqhjg
Jan 26 12:39:43.681: INFO: Got endpoints: latency-svc-kqhjg [2.479661078s]
Jan 26 12:39:43.734: INFO: Created: latency-svc-8gzh7
Jan 26 12:39:43.752: INFO: Got endpoints: latency-svc-8gzh7 [2.392179869s]
Jan 26 12:39:43.995: INFO: Created: latency-svc-x88xn
Jan 26 12:39:44.003: INFO: Got endpoints: latency-svc-x88xn [2.602973738s]
Jan 26 12:39:44.173: INFO: Created: latency-svc-45wwf
Jan 26 12:39:44.198: INFO: Got endpoints: latency-svc-45wwf [2.615337743s]
Jan 26 12:39:44.279: INFO: Created: latency-svc-6f988
Jan 26 12:39:44.403: INFO: Got endpoints: latency-svc-6f988 [2.619841524s]
Jan 26 12:39:44.431: INFO: Created: latency-svc-cmhbj
Jan 26 12:39:44.447: INFO: Got endpoints: latency-svc-cmhbj [2.577757306s]
Jan 26 12:39:44.601: INFO: Created: latency-svc-jv4n8
Jan 26 12:39:44.651: INFO: Got endpoints: latency-svc-jv4n8 [2.609710447s]
Jan 26 12:39:44.793: INFO: Created: latency-svc-mp2ps
Jan 26 12:39:44.815: INFO: Got endpoints: latency-svc-mp2ps [2.628700774s]
Jan 26 12:39:45.076: INFO: Created: latency-svc-jjj5j
Jan 26 12:39:45.095: INFO: Got endpoints: latency-svc-jjj5j [2.699981838s]
Jan 26 12:39:45.329: INFO: Created: latency-svc-2blb5
Jan 26 12:39:45.374: INFO: Got endpoints: latency-svc-2blb5 [2.932460671s]
Jan 26 12:39:45.565: INFO: Created: latency-svc-58j4k
Jan 26 12:39:45.591: INFO: Got endpoints: latency-svc-58j4k [2.866629726s]
Jan 26 12:39:45.643: INFO: Created: latency-svc-6dnpd
Jan 26 12:39:45.827: INFO: Got endpoints: latency-svc-6dnpd [2.854672994s]
Jan 26 12:39:45.849: INFO: Created: latency-svc-zdcmm
Jan 26 12:39:45.869: INFO: Got endpoints: latency-svc-zdcmm [2.643770495s]
Jan 26 12:39:46.010: INFO: Created: latency-svc-bd99v
Jan 26 12:39:46.028: INFO: Got endpoints: latency-svc-bd99v [2.637463952s]
Jan 26 12:39:46.098: INFO: Created: latency-svc-qqrjs
Jan 26 12:39:46.238: INFO: Got endpoints: latency-svc-qqrjs [2.776003959s]
Jan 26 12:39:46.277: INFO: Created: latency-svc-4xzrx
Jan 26 12:39:46.294: INFO: Got endpoints: latency-svc-4xzrx [2.613567702s]
Jan 26 12:39:46.485: INFO: Created: latency-svc-9bt7z
Jan 26 12:39:46.532: INFO: Got endpoints: latency-svc-9bt7z [293.976846ms]
Jan 26 12:39:46.728: INFO: Created: latency-svc-lhw9d
Jan 26 12:39:46.752: INFO: Got endpoints: latency-svc-lhw9d [3.000019761s]
Jan 26 12:39:46.831: INFO: Created: latency-svc-9dr84
Jan 26 12:39:47.088: INFO: Got endpoints: latency-svc-9dr84 [3.085326994s]
Jan 26 12:39:47.117: INFO: Created: latency-svc-4hcls
Jan 26 12:39:47.198: INFO: Got endpoints: latency-svc-4hcls [2.999146921s]
Jan 26 12:39:47.355: INFO: Created: latency-svc-8vpwc
Jan 26 12:39:47.400: INFO: Got endpoints: latency-svc-8vpwc [2.996293899s]
Jan 26 12:39:47.552: INFO: Created: latency-svc-c5zfp
Jan 26 12:39:47.587: INFO: Got endpoints: latency-svc-c5zfp [3.139904317s]
Jan 26 12:39:47.745: INFO: Created: latency-svc-5vwgh
Jan 26 12:39:47.767: INFO: Got endpoints: latency-svc-5vwgh [3.116082565s]
Jan 26 12:39:47.956: INFO: Created: latency-svc-4qtcl
Jan 26 12:39:47.981: INFO: Got endpoints: latency-svc-4qtcl [3.165872276s]
Jan 26 12:39:48.191: INFO: Created: latency-svc-56q4j
Jan 26 12:39:48.235: INFO: Got endpoints: latency-svc-56q4j [3.140027707s]
Jan 26 12:39:48.383: INFO: Created: latency-svc-g7bgw
Jan 26 12:39:48.387: INFO: Got endpoints: latency-svc-g7bgw [3.012660595s]
Jan 26 12:39:48.438: INFO: Created: latency-svc-9psth
Jan 26 12:39:48.590: INFO: Got endpoints: latency-svc-9psth [2.9984907s]
Jan 26 12:39:48.616: INFO: Created: latency-svc-6j4wg
Jan 26 12:39:48.667: INFO: Got endpoints: latency-svc-6j4wg [2.839861888s]
Jan 26 12:39:48.996: INFO: Created: latency-svc-w2ccj
Jan 26 12:39:49.032: INFO: Got endpoints: latency-svc-w2ccj [3.162687714s]
Jan 26 12:39:49.346: INFO: Created: latency-svc-p5h6v
Jan 26 12:39:49.373: INFO: Got endpoints: latency-svc-p5h6v [3.344697902s]
Jan 26 12:39:49.574: INFO: Created: latency-svc-p7mwp
Jan 26 12:39:49.608: INFO: Got endpoints: latency-svc-p7mwp [3.31364076s]
Jan 26 12:39:49.917: INFO: Created: latency-svc-mkn7l
Jan 26 12:39:50.164: INFO: Got endpoints: latency-svc-mkn7l [3.631607539s]
Jan 26 12:39:50.265: INFO: Created: latency-svc-r8t64
Jan 26 12:39:50.397: INFO: Got endpoints: latency-svc-r8t64 [3.644640134s]
Jan 26 12:39:50.444: INFO: Created: latency-svc-zrwtn
Jan 26 12:39:50.454: INFO: Got endpoints: latency-svc-zrwtn [3.366228927s]
Jan 26 12:39:50.641: INFO: Created: latency-svc-jfnv9
Jan 26 12:39:50.673: INFO: Got endpoints: latency-svc-jfnv9 [3.475357247s]
Jan 26 12:39:50.895: INFO: Created: latency-svc-5wc6c
Jan 26 12:39:50.937: INFO: Got endpoints: latency-svc-5wc6c [3.537047669s]
Jan 26 12:39:51.084: INFO: Created: latency-svc-m4vqh
Jan 26 12:39:51.099: INFO: Got endpoints: latency-svc-m4vqh [3.511576337s]
Jan 26 12:39:51.322: INFO: Created: latency-svc-kwk2w
Jan 26 12:39:51.483: INFO: Got endpoints: latency-svc-kwk2w [3.715354748s]
Jan 26 12:39:51.588: INFO: Created: latency-svc-pvwgz
Jan 26 12:39:51.718: INFO: Got endpoints: latency-svc-pvwgz [3.736683261s]
Jan 26 12:39:51.742: INFO: Created: latency-svc-fcg4l
Jan 26 12:39:51.788: INFO: Got endpoints: latency-svc-fcg4l [3.551823202s]
Jan 26 12:39:51.927: INFO: Created: latency-svc-sdhhn
Jan 26 12:39:51.976: INFO: Got endpoints: latency-svc-sdhhn [3.58945591s]
Jan 26 12:39:51.993: INFO: Created: latency-svc-vvrrc
Jan 26 12:39:52.004: INFO: Got endpoints: latency-svc-vvrrc [3.413966135s]
Jan 26 12:39:52.184: INFO: Created: latency-svc-cvxcz
Jan 26 12:39:52.254: INFO: Got endpoints: latency-svc-cvxcz [3.586633936s]
Jan 26 12:39:52.295: INFO: Created: latency-svc-rh5b8
Jan 26 12:39:52.346: INFO: Got endpoints: latency-svc-rh5b8 [3.313242814s]
Jan 26 12:39:52.575: INFO: Created: latency-svc-fqwpc
Jan 26 12:39:52.589: INFO: Got endpoints: latency-svc-fqwpc [3.216156731s]
Jan 26 12:39:52.743: INFO: Created: latency-svc-vmpk4
Jan 26 12:39:52.767: INFO: Got endpoints: latency-svc-vmpk4 [3.15814486s]
Jan 26 12:39:52.975: INFO: Created: latency-svc-zrdlg
Jan 26 12:39:52.982: INFO: Got endpoints: latency-svc-zrdlg [2.818448815s]
Jan 26 12:39:53.067: INFO: Created: latency-svc-zddhh
Jan 26 12:39:53.175: INFO: Got endpoints: latency-svc-zddhh [2.778079641s]
Jan 26 12:39:53.256: INFO: Created: latency-svc-v47wr
Jan 26 12:39:53.382: INFO: Got endpoints: latency-svc-v47wr [2.927874263s]
Jan 26 12:39:53.403: INFO: Created: latency-svc-nlhxp
Jan 26 12:39:53.426: INFO: Got endpoints: latency-svc-nlhxp [2.752613981s]
Jan 26 12:39:53.474: INFO: Created: latency-svc-kpvpz
Jan 26 12:39:53.557: INFO: Got endpoints: latency-svc-kpvpz [2.619795639s]
Jan 26 12:39:53.615: INFO: Created: latency-svc-ctkcq
Jan 26 12:39:53.865: INFO: Got endpoints: latency-svc-ctkcq [2.766058368s]
Jan 26 12:39:53.943: INFO: Created: latency-svc-lw8sp
Jan 26 12:39:54.041: INFO: Got endpoints: latency-svc-lw8sp [2.558397515s]
Jan 26 12:39:54.072: INFO: Created: latency-svc-cwp4t
Jan 26 12:39:54.107: INFO: Got endpoints: latency-svc-cwp4t [2.388972463s]
Jan 26 12:39:54.290: INFO: Created: latency-svc-cjl9h
Jan 26 12:39:54.340: INFO: Got endpoints: latency-svc-cjl9h [2.552365326s]
Jan 26 12:39:54.441: INFO: Created: latency-svc-ns8hn
Jan 26 12:39:54.506: INFO: Got endpoints: latency-svc-ns8hn [2.530030325s]
Jan 26 12:39:54.513: INFO: Created: latency-svc-t9wjk
Jan 26 12:39:54.694: INFO: Got endpoints: latency-svc-t9wjk [2.690155096s]
Jan 26 12:39:54.708: INFO: Created: latency-svc-xj6xw
Jan 26 12:39:54.721: INFO: Got endpoints: latency-svc-xj6xw [2.466955584s]
Jan 26 12:39:54.776: INFO: Created: latency-svc-vdg58
Jan 26 12:39:55.057: INFO: Got endpoints: latency-svc-vdg58 [2.710722689s]
Jan 26 12:39:55.104: INFO: Created: latency-svc-j4fm2
Jan 26 12:39:55.256: INFO: Got endpoints: latency-svc-j4fm2 [2.666879202s]
Jan 26 12:39:55.281: INFO: Created: latency-svc-sfph2
Jan 26 12:39:55.292: INFO: Got endpoints: latency-svc-sfph2 [2.525214766s]
Jan 26 12:39:55.335: INFO: Created: latency-svc-nlh2c
Jan 26 12:39:55.410: INFO: Got endpoints: latency-svc-nlh2c [2.428254005s]
Jan 26 12:39:55.437: INFO: Created: latency-svc-cc6ws
Jan 26 12:39:55.477: INFO: Got endpoints: latency-svc-cc6ws [2.302506161s]
Jan 26 12:39:55.495: INFO: Created: latency-svc-zwct7
Jan 26 12:39:55.497: INFO: Got endpoints: latency-svc-zwct7 [2.114161811s]
Jan 26 12:39:55.608: INFO: Created: latency-svc-cq97l
Jan 26 12:39:55.625: INFO: Got endpoints: latency-svc-cq97l [2.198922657s]
Jan 26 12:39:55.676: INFO: Created: latency-svc-kpfhv
Jan 26 12:39:55.689: INFO: Got endpoints: latency-svc-kpfhv [2.131512929s]
Jan 26 12:39:55.783: INFO: Created: latency-svc-f99tq
Jan 26 12:39:55.783: INFO: Got endpoints: latency-svc-f99tq [1.91820113s]
Jan 26 12:39:55.819: INFO: Created: latency-svc-8kd9j
Jan 26 12:39:55.831: INFO: Got endpoints: latency-svc-8kd9j [1.789603237s]
Jan 26 12:39:55.942: INFO: Created: latency-svc-wvn22
Jan 26 12:39:55.955: INFO: Got endpoints: latency-svc-wvn22 [1.848002542s]
Jan 26 12:39:55.988: INFO: Created: latency-svc-vgphh
Jan 26 12:39:55.997: INFO: Got endpoints: latency-svc-vgphh [1.657034557s]
Jan 26 12:39:56.089: INFO: Created: latency-svc-tmxgc
Jan 26 12:39:56.098: INFO: Got endpoints: latency-svc-tmxgc [1.591674965s]
Jan 26 12:39:56.157: INFO: Created: latency-svc-kr9wh
Jan 26 12:39:56.163: INFO: Got endpoints: latency-svc-kr9wh [1.469266296s]
Jan 26 12:39:56.402: INFO: Created: latency-svc-tthll
Jan 26 12:39:56.428: INFO: Got endpoints: latency-svc-tthll [1.706588498s]
Jan 26 12:39:56.496: INFO: Created: latency-svc-hnnfl
Jan 26 12:39:56.576: INFO: Got endpoints: latency-svc-hnnfl [1.519318603s]
Jan 26 12:39:56.621: INFO: Created: latency-svc-nqvs4
Jan 26 12:39:56.662: INFO: Got endpoints: latency-svc-nqvs4 [1.405238845s]
Jan 26 12:39:56.768: INFO: Created: latency-svc-dfh2p
Jan 26 12:39:56.789: INFO: Got endpoints: latency-svc-dfh2p [1.497339824s]
Jan 26 12:39:56.877: INFO: Created: latency-svc-6bz59
Jan 26 12:39:57.026: INFO: Got endpoints: latency-svc-6bz59 [1.615524027s]
Jan 26 12:39:57.052: INFO: Created: latency-svc-mcbtc
Jan 26 12:39:57.068: INFO: Got endpoints: latency-svc-mcbtc [1.589805772s]
Jan 26 12:39:57.123: INFO: Created: latency-svc-qlvnx
Jan 26 12:39:57.201: INFO: Got endpoints: latency-svc-qlvnx [1.704610666s]
Jan 26 12:39:57.232: INFO: Created: latency-svc-vdd7r
Jan 26 12:39:57.276: INFO: Got endpoints: latency-svc-vdd7r [1.650576963s]
Jan 26 12:39:57.377: INFO: Created: latency-svc-b59vf
Jan 26 12:39:57.401: INFO: Got endpoints: latency-svc-b59vf [1.712074355s]
Jan 26 12:39:57.471: INFO: Created: latency-svc-wgrjj
Jan 26 12:39:57.713: INFO: Got endpoints: latency-svc-wgrjj [1.92998822s]
Jan 26 12:39:57.773: INFO: Created: latency-svc-phfpn
Jan 26 12:39:57.893: INFO: Got endpoints: latency-svc-phfpn [2.061508424s]
Jan 26 12:39:58.340: INFO: Created: latency-svc-sjpk8
Jan 26 12:39:58.371: INFO: Created: latency-svc-sgdj9
Jan 26 12:39:58.372: INFO: Got endpoints: latency-svc-sjpk8 [2.416239294s]
Jan 26 12:39:58.381: INFO: Got endpoints: latency-svc-sgdj9 [2.383397318s]
Jan 26 12:39:58.646: INFO: Created: latency-svc-hj7rh
Jan 26 12:39:58.665: INFO: Got endpoints: latency-svc-hj7rh [2.566292089s]
Jan 26 12:39:59.040: INFO: Created: latency-svc-2lvh7
Jan 26 12:39:59.388: INFO: Got endpoints: latency-svc-2lvh7 [3.224843663s]
Jan 26 12:39:59.485: INFO: Created: latency-svc-zszp9
Jan 26 12:39:59.588: INFO: Got endpoints: latency-svc-zszp9 [3.160324962s]
Jan 26 12:39:59.790: INFO: Created: latency-svc-tbgtp
Jan 26 12:39:59.803: INFO: Got endpoints: latency-svc-tbgtp [3.227228562s]
Jan 26 12:39:59.871: INFO: Created: latency-svc-dkgpt
Jan 26 12:39:59.950: INFO: Got endpoints: latency-svc-dkgpt [3.288194733s]
Jan 26 12:39:59.999: INFO: Created: latency-svc-kdgt4
Jan 26 12:40:00.005: INFO: Got endpoints: latency-svc-kdgt4 [3.215486446s]
Jan 26 12:40:00.184: INFO: Created: latency-svc-z8bx4
Jan 26 12:40:00.201: INFO: Got endpoints: latency-svc-z8bx4 [3.174546785s]
Jan 26 12:40:00.272: INFO: Created: latency-svc-qk77r
Jan 26 12:40:00.353: INFO: Got endpoints: latency-svc-qk77r [3.285344144s]
Jan 26 12:40:00.392: INFO: Created: latency-svc-htf6p
Jan 26 12:40:00.397: INFO: Got endpoints: latency-svc-htf6p [3.19592432s]
Jan 26 12:40:00.608: INFO: Created: latency-svc-dtxpd
Jan 26 12:40:00.771: INFO: Got endpoints: latency-svc-dtxpd [3.495179085s]
Jan 26 12:40:00.789: INFO: Created: latency-svc-fcw5h
Jan 26 12:40:00.811: INFO: Got endpoints: latency-svc-fcw5h [3.409249494s]
Jan 26 12:40:00.979: INFO: Created: latency-svc-rbdvr
Jan 26 12:40:00.995: INFO: Got endpoints: latency-svc-rbdvr [3.28107562s]
Jan 26 12:40:01.189: INFO: Created: latency-svc-7zdpb
Jan 26 12:40:01.199: INFO: Got endpoints: latency-svc-7zdpb [3.305634479s]
Jan 26 12:40:01.414: INFO: Created: latency-svc-2jkmw
Jan 26 12:40:01.441: INFO: Got endpoints: latency-svc-2jkmw [3.068908436s]
Jan 26 12:40:01.570: INFO: Created: latency-svc-nncj4
Jan 26 12:40:01.591: INFO: Got endpoints: latency-svc-nncj4 [3.209745433s]
Jan 26 12:40:01.643: INFO: Created: latency-svc-6nd8x
Jan 26 12:40:01.660: INFO: Got endpoints: latency-svc-6nd8x [2.994908091s]
Jan 26 12:40:01.822: INFO: Created: latency-svc-mxq25
Jan 26 12:40:01.860: INFO: Got endpoints: latency-svc-mxq25 [2.471614458s]
Jan 26 12:40:02.191: INFO: Created: latency-svc-rw5wp
Jan 26 12:40:02.386: INFO: Got endpoints: latency-svc-rw5wp [2.797524119s]
Jan 26 12:40:02.414: INFO: Created: latency-svc-q6269
Jan 26 12:40:02.489: INFO: Got endpoints: latency-svc-q6269 [2.685048214s]
Jan 26 12:40:02.635: INFO: Created: latency-svc-9slqb
Jan 26 12:40:02.761: INFO: Got endpoints: latency-svc-9slqb [2.810907194s]
Jan 26 12:40:02.798: INFO: Created: latency-svc-45fm8
Jan 26 12:40:02.804: INFO: Got endpoints: latency-svc-45fm8 [2.799278827s]
Jan 26 12:40:02.844: INFO: Created: latency-svc-g7jhh
Jan 26 12:40:02.920: INFO: Got endpoints: latency-svc-g7jhh [2.718563014s]
Jan 26 12:40:02.947: INFO: Created: latency-svc-fg9hl
Jan 26 12:40:02.957: INFO: Got endpoints: latency-svc-fg9hl [2.603528106s]
Jan 26 12:40:03.017: INFO: Created: latency-svc-kc4wv
Jan 26 12:40:03.168: INFO: Got endpoints: latency-svc-kc4wv [2.770750216s]
Jan 26 12:40:03.192: INFO: Created: latency-svc-n92vx
Jan 26 12:40:03.209: INFO: Got endpoints: latency-svc-n92vx [2.437097473s]
Jan 26 12:40:03.353: INFO: Created: latency-svc-js2hw
Jan 26 12:40:03.357: INFO: Got endpoints: latency-svc-js2hw [2.546079935s]
Jan 26 12:40:03.595: INFO: Created: latency-svc-gp6rh
Jan 26 12:40:03.648: INFO: Got endpoints: latency-svc-gp6rh [2.652930648s]
Jan 26 12:40:03.806: INFO: Created: latency-svc-b25l6
Jan 26 12:40:03.813: INFO: Got endpoints: latency-svc-b25l6 [2.614144912s]
Jan 26 12:40:03.958: INFO: Created: latency-svc-ddzrm
Jan 26 12:40:03.983: INFO: Got endpoints: latency-svc-ddzrm [2.542114746s]
Jan 26 12:40:04.104: INFO: Created: latency-svc-qqx8h
Jan 26 12:40:04.118: INFO: Got endpoints: latency-svc-qqx8h [2.527014237s]
Jan 26 12:40:04.173: INFO: Created: latency-svc-p29dr
Jan 26 12:40:04.339: INFO: Got endpoints: latency-svc-p29dr [2.679299941s]
Jan 26 12:40:04.379: INFO: Created: latency-svc-9vs9v
Jan 26 12:40:04.384: INFO: Got endpoints: latency-svc-9vs9v [2.523111276s]
Jan 26 12:40:04.537: INFO: Created: latency-svc-8pjtf
Jan 26 12:40:04.604: INFO: Got endpoints: latency-svc-8pjtf [2.218189869s]
Jan 26 12:40:04.621: INFO: Created: latency-svc-52rvd
Jan 26 12:40:04.754: INFO: Got endpoints: latency-svc-52rvd [2.265637432s]
Jan 26 12:40:04.791: INFO: Created: latency-svc-gbmtg
Jan 26 12:40:04.837: INFO: Got endpoints: latency-svc-gbmtg [2.076363278s]
Jan 26 12:40:04.967: INFO: Created: latency-svc-vt9cd
Jan 26 12:40:04.994: INFO: Got endpoints: latency-svc-vt9cd [2.18916366s]
Jan 26 12:40:05.222: INFO: Created: latency-svc-ww668
Jan 26 12:40:05.223: INFO: Got endpoints: latency-svc-ww668 [2.303016287s]
Jan 26 12:40:06.414: INFO: Created: latency-svc-f8qd9
Jan 26 12:40:06.490: INFO: Got endpoints: latency-svc-f8qd9 [3.532900061s]
Jan 26 12:40:06.866: INFO: Created: latency-svc-s4g5b
Jan 26 12:40:06.879: INFO: Got endpoints: latency-svc-s4g5b [3.710984516s]
Jan 26 12:40:07.098: INFO: Created: latency-svc-g2w7m
Jan 26 12:40:07.108: INFO: Got endpoints: latency-svc-g2w7m [3.899251975s]
Jan 26 12:40:07.389: INFO: Created: latency-svc-44h69
Jan 26 12:40:07.418: INFO: Got endpoints: latency-svc-44h69 [4.060699261s]
Jan 26 12:40:07.717: INFO: Created: latency-svc-2zvp2
Jan 26 12:40:07.755: INFO: Got endpoints: latency-svc-2zvp2 [4.107715814s]
Jan 26 12:40:07.920: INFO: Created: latency-svc-bmphn
Jan 26 12:40:07.945: INFO: Got endpoints: latency-svc-bmphn [4.131480465s]
Jan 26 12:40:08.071: INFO: Created: latency-svc-brpnf
Jan 26 12:40:08.092: INFO: Got endpoints: latency-svc-brpnf [4.109386237s]
Jan 26 12:40:08.328: INFO: Created: latency-svc-v5s7b
Jan 26 12:40:08.346: INFO: Got endpoints: latency-svc-v5s7b [4.228221896s]
Jan 26 12:40:08.397: INFO: Created: latency-svc-mzb5c
Jan 26 12:40:08.567: INFO: Got endpoints: latency-svc-mzb5c [4.227915265s]
Jan 26 12:40:08.604: INFO: Created: latency-svc-mw89h
Jan 26 12:40:08.748: INFO: Got endpoints: latency-svc-mw89h [4.36468025s]
Jan 26 12:40:08.809: INFO: Created: latency-svc-6gdwm
Jan 26 12:40:09.000: INFO: Got endpoints: latency-svc-6gdwm [4.395891912s]
Jan 26 12:40:09.227: INFO: Created: latency-svc-ctbx9
Jan 26 12:40:09.239: INFO: Got endpoints: latency-svc-ctbx9 [4.484082021s]
Jan 26 12:40:09.473: INFO: Created: latency-svc-ctctb
Jan 26 12:40:09.511: INFO: Got endpoints: latency-svc-ctctb [4.673273765s]
Jan 26 12:40:09.722: INFO: Created: latency-svc-b4d29
Jan 26 12:40:09.750: INFO: Got endpoints: latency-svc-b4d29 [4.755895674s]
Jan 26 12:40:09.825: INFO: Created: latency-svc-c7lk5
Jan 26 12:40:09.947: INFO: Got endpoints: latency-svc-c7lk5 [4.724449871s]
Jan 26 12:40:09.971: INFO: Created: latency-svc-2vjpl
Jan 26 12:40:09.990: INFO: Got endpoints: latency-svc-2vjpl [3.500545026s]
Jan 26 12:40:10.145: INFO: Created: latency-svc-2h7s2
Jan 26 12:40:10.166: INFO: Got endpoints: latency-svc-2h7s2 [3.286710445s]
Jan 26 12:40:10.236: INFO: Created: latency-svc-l5s49
Jan 26 12:40:10.373: INFO: Got endpoints: latency-svc-l5s49 [3.265371826s]
Jan 26 12:40:10.420: INFO: Created: latency-svc-blzn9
Jan 26 12:40:10.464: INFO: Got endpoints: latency-svc-blzn9 [3.046670295s]
Jan 26 12:40:10.560: INFO: Created: latency-svc-n2lbh
Jan 26 12:40:10.691: INFO: Got endpoints: latency-svc-n2lbh [2.93528082s]
Jan 26 12:40:10.762: INFO: Created: latency-svc-8hk86
Jan 26 12:40:10.782: INFO: Got endpoints: latency-svc-8hk86 [2.837190871s]
Jan 26 12:40:10.971: INFO: Created: latency-svc-lfg6z
Jan 26 12:40:11.010: INFO: Got endpoints: latency-svc-lfg6z [2.917976915s]
Jan 26 12:40:11.166: INFO: Created: latency-svc-rb2t8
Jan 26 12:40:11.203: INFO: Got endpoints: latency-svc-rb2t8 [2.856218129s]
Jan 26 12:40:11.235: INFO: Created: latency-svc-c5qqr
Jan 26 12:40:11.244: INFO: Got endpoints: latency-svc-c5qqr [2.676772727s]
Jan 26 12:40:11.405: INFO: Created: latency-svc-9pd8r
Jan 26 12:40:11.456: INFO: Got endpoints: latency-svc-9pd8r [2.707946356s]
Jan 26 12:40:11.461: INFO: Created: latency-svc-xf75g
Jan 26 12:40:11.566: INFO: Got endpoints: latency-svc-xf75g [2.565031597s]
Jan 26 12:40:11.828: INFO: Created: latency-svc-2hjdv
Jan 26 12:40:11.828: INFO: Got endpoints: latency-svc-2hjdv [2.588941723s]
Jan 26 12:40:12.036: INFO: Created: latency-svc-52pnw
Jan 26 12:40:12.064: INFO: Created: latency-svc-dls5x
Jan 26 12:40:12.078: INFO: Got endpoints: latency-svc-52pnw [2.566622861s]
Jan 26 12:40:12.097: INFO: Got endpoints: latency-svc-dls5x [2.347109956s]
Jan 26 12:40:12.298: INFO: Created: latency-svc-zpjq4
Jan 26 12:40:12.329: INFO: Got endpoints: latency-svc-zpjq4 [2.381398537s]
Jan 26 12:40:12.481: INFO: Created: latency-svc-5l5vh
Jan 26 12:40:12.510: INFO: Got endpoints: latency-svc-5l5vh [2.518941335s]
Jan 26 12:40:12.694: INFO: Created: latency-svc-zpggv
Jan 26 12:40:12.725: INFO: Got endpoints: latency-svc-zpggv [2.558390461s]
Jan 26 12:40:12.827: INFO: Created: latency-svc-dmjmg
Jan 26 12:40:12.843: INFO: Got endpoints: latency-svc-dmjmg [2.469376316s]
Jan 26 12:40:12.984: INFO: Created: latency-svc-djsp5
Jan 26 12:40:13.013: INFO: Got endpoints: latency-svc-djsp5 [2.54864453s]
Jan 26 12:40:13.094: INFO: Created: latency-svc-xj7df
Jan 26 12:40:13.184: INFO: Got endpoints: latency-svc-xj7df [2.492752688s]
Jan 26 12:40:13.189: INFO: Created: latency-svc-cd5mp
Jan 26 12:40:13.224: INFO: Got endpoints: latency-svc-cd5mp [2.442284621s]
Jan 26 12:40:13.403: INFO: Created: latency-svc-pgmpb
Jan 26 12:40:13.414: INFO: Got endpoints: latency-svc-pgmpb [2.403339817s]
Jan 26 12:40:13.489: INFO: Created: latency-svc-z9vqz
Jan 26 12:40:13.615: INFO: Got endpoints: latency-svc-z9vqz [2.41215745s]
Jan 26 12:40:13.639: INFO: Created: latency-svc-kjdzs
Jan 26 12:40:13.671: INFO: Got endpoints: latency-svc-kjdzs [2.426656642s]
Jan 26 12:40:13.920: INFO: Created: latency-svc-599hn
Jan 26 12:40:13.936: INFO: Got endpoints: latency-svc-599hn [2.479163008s]
Jan 26 12:40:14.124: INFO: Created: latency-svc-9qqpc
Jan 26 12:40:14.136: INFO: Got endpoints: latency-svc-9qqpc [2.570277262s]
Jan 26 12:40:14.268: INFO: Created: latency-svc-jmvwb
Jan 26 12:40:14.293: INFO: Got endpoints: latency-svc-jmvwb [2.465265926s]
Jan 26 12:40:14.450: INFO: Created: latency-svc-p4mqc
Jan 26 12:40:14.475: INFO: Got endpoints: latency-svc-p4mqc [2.39664917s]
Jan 26 12:40:14.554: INFO: Created: latency-svc-sfxvx
Jan 26 12:40:14.648: INFO: Got endpoints: latency-svc-sfxvx [2.550827498s]
Jan 26 12:40:14.702: INFO: Created: latency-svc-4n444
Jan 26 12:40:14.814: INFO: Got endpoints: latency-svc-4n444 [2.485119297s]
Jan 26 12:40:14.855: INFO: Created: latency-svc-w4pzn
Jan 26 12:40:14.873: INFO: Got endpoints: latency-svc-w4pzn [2.363332472s]
Jan 26 12:40:14.978: INFO: Created: latency-svc-pmgtz
Jan 26 12:40:14.991: INFO: Got endpoints: latency-svc-pmgtz [2.265584134s]
Jan 26 12:40:15.047: INFO: Created: latency-svc-wr5zn
Jan 26 12:40:15.065: INFO: Got endpoints: latency-svc-wr5zn [2.22198291s]
Jan 26 12:40:15.217: INFO: Created: latency-svc-hbm2z
Jan 26 12:40:15.267: INFO: Got endpoints: latency-svc-hbm2z [2.253249731s]
Jan 26 12:40:15.309: INFO: Created: latency-svc-z999b
Jan 26 12:40:15.332: INFO: Got endpoints: latency-svc-z999b [2.148015293s]
Jan 26 12:40:15.332: INFO: Latencies: [265.434492ms 293.976846ms 512.161698ms 581.750475ms 747.000739ms 1.080051029s 1.405238845s 1.469266296s 1.497339824s 1.519318603s 1.589805772s 1.591674965s 1.615524027s 1.650576963s 1.657034557s 1.667924448s 1.704610666s 1.706588498s 1.712074355s 1.789603237s 1.848002542s 1.91820113s 1.92998822s 1.991151309s 2.061508424s 2.071543899s 2.076363278s 2.114161811s 2.131512929s 2.148015293s 2.18916366s 2.198922657s 2.218189869s 2.22198291s 2.253249731s 2.265584134s 2.265637432s 2.302506161s 2.303016287s 2.312188203s 2.347109956s 2.363332472s 2.381398537s 2.383397318s 2.388972463s 2.392179869s 2.39664917s 2.403339817s 2.41215745s 2.416239294s 2.426656642s 2.428254005s 2.431486723s 2.437097473s 2.442284621s 2.459201505s 2.465265926s 2.466955584s 2.469376316s 2.471614458s 2.479163008s 2.479661078s 2.485119297s 2.492752688s 2.518941335s 2.523111276s 2.525214766s 2.527014237s 2.530030325s 2.542114746s 2.546079935s 2.54864453s 2.550827498s 2.552365326s 2.558390461s 2.558397515s 2.565031597s 2.566292089s 2.566622861s 2.570277262s 2.577757306s 2.588941723s 2.602973738s 2.603528106s 2.609710447s 2.613567702s 2.614144912s 2.615337743s 2.619795639s 2.619841524s 2.628700774s 2.637463952s 2.643770495s 2.652930648s 2.666879202s 2.676772727s 2.679299941s 2.685048214s 2.690155096s 2.699981838s 2.707946356s 2.710722689s 2.718563014s 2.752613981s 2.766058368s 2.770750216s 2.776003959s 2.778079641s 2.797524119s 2.799278827s 2.810907194s 2.815538218s 2.818448815s 2.837190871s 2.839861888s 2.854672994s 2.856218129s 2.866629726s 2.910101308s 2.917976915s 2.927874263s 2.932460671s 2.93528082s 2.994908091s 2.996293899s 2.9984907s 2.999146921s 3.000019761s 3.004697301s 3.012660595s 3.046670295s 3.068908436s 3.081840729s 3.085326994s 3.116082565s 3.116299997s 3.121778698s 3.139904317s 3.140027707s 3.15814486s 3.160324962s 3.162687714s 3.165872276s 3.174546785s 3.192808413s 3.19592432s 3.209745433s 3.215486446s 3.216156731s 3.224843663s 3.225418405s 3.227228562s 3.229979002s 3.265371826s 3.28107562s 3.285344144s 3.286710445s 3.288194733s 3.305634479s 3.313242814s 3.31364076s 3.344697902s 3.366228927s 3.409249494s 3.413966135s 3.475357247s 3.495179085s 3.500545026s 3.511576337s 3.532900061s 3.537047669s 3.551823202s 3.586633936s 3.58945591s 3.631607539s 3.644640134s 3.672877027s 3.710984516s 3.715354748s 3.736683261s 3.783626702s 3.833623515s 3.84727982s 3.861680804s 3.899251975s 3.91384382s 3.914115532s 4.060699261s 4.071490205s 4.107715814s 4.109386237s 4.131480465s 4.227915265s 4.228221896s 4.36468025s 4.395891912s 4.484082021s 4.673273765s 4.724449871s 4.755895674s]
Jan 26 12:40:15.332: INFO: 50 %ile: 2.707946356s
Jan 26 12:40:15.332: INFO: 90 %ile: 3.783626702s
Jan 26 12:40:15.332: INFO: 99 %ile: 4.724449871s
Jan 26 12:40:15.332: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:40:15.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-sszrq" for this suite.
Jan 26 12:41:31.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:41:31.688: INFO: namespace: e2e-tests-svc-latency-sszrq, resource: bindings, ignored listing per whitelist
Jan 26 12:41:32.012: INFO: namespace e2e-tests-svc-latency-sszrq deletion completed in 1m16.666646447s

• [SLOW TEST:124.314 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
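The three %ile lines above are derived from the 200 endpoint-propagation samples listed in the Latencies entry. Below is a minimal stand-alone sketch of that kind of calculation using only the Go standard library and a simple nearest-rank index; the exact rounding the e2e framework uses may differ, and the sample values are just a handful copied from the run for illustration.

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0-100) of an ascending slice of
// durations, using a simple nearest-rank style index.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(float64(len(sorted)) * p / 100.0)
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A handful of the samples from the run above; the real report uses all 200.
	samples := []time.Duration{
		265434492 * time.Nanosecond,  // 265.434492ms
		2707946356 * time.Nanosecond, // 2.707946356s
		3783626702 * time.Nanosecond, // 3.783626702s
		4755895674 * time.Nanosecond, // 4.755895674s
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%v %%ile: %v\n", p, percentile(samples, p))
	}
}

Sorting first and indexing by len*p/100 should reproduce numbers of the same shape as the report above (50 %ile around 2.7s, 99 %ile around 4.7s) when fed the full 200-sample list.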
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:41:32.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 26 12:41:33.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-k4pw4'
Jan 26 12:41:35.023: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 26 12:41:35.024: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 26 12:41:35.131: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 26 12:41:35.189: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 26 12:41:35.276: INFO: scanned /root for discovery docs: 
Jan 26 12:41:35.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-k4pw4'
Jan 26 12:42:02.303: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 26 12:42:02.303: INFO: stdout: "Created e2e-test-nginx-rc-096ee1733ff11305d10b877a0f349fa5\nScaling up e2e-test-nginx-rc-096ee1733ff11305d10b877a0f349fa5 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-096ee1733ff11305d10b877a0f349fa5 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-096ee1733ff11305d10b877a0f349fa5 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jan 26 12:42:02.303: INFO: stdout: "Created e2e-test-nginx-rc-096ee1733ff11305d10b877a0f349fa5\nScaling up e2e-test-nginx-rc-096ee1733ff11305d10b877a0f349fa5 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-096ee1733ff11305d10b877a0f349fa5 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-096ee1733ff11305d10b877a0f349fa5 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 26 12:42:02.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-k4pw4'
Jan 26 12:42:02.493: INFO: stderr: ""
Jan 26 12:42:02.493: INFO: stdout: "e2e-test-nginx-rc-096ee1733ff11305d10b877a0f349fa5-g8zjs e2e-test-nginx-rc-v47fm "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 26 12:42:07.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-k4pw4'
Jan 26 12:42:07.705: INFO: stderr: ""
Jan 26 12:42:07.705: INFO: stdout: "e2e-test-nginx-rc-096ee1733ff11305d10b877a0f349fa5-g8zjs "
Jan 26 12:42:07.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-096ee1733ff11305d10b877a0f349fa5-g8zjs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k4pw4'
Jan 26 12:42:07.908: INFO: stderr: ""
Jan 26 12:42:07.908: INFO: stdout: "true"
Jan 26 12:42:07.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-096ee1733ff11305d10b877a0f349fa5-g8zjs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k4pw4'
Jan 26 12:42:08.050: INFO: stderr: ""
Jan 26 12:42:08.050: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 26 12:42:08.050: INFO: e2e-test-nginx-rc-096ee1733ff11305d10b877a0f349fa5-g8zjs is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 26 12:42:08.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-k4pw4'
Jan 26 12:42:08.221: INFO: stderr: ""
Jan 26 12:42:08.221: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:42:08.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k4pw4" for this suite.
Jan 26 12:42:32.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:42:32.404: INFO: namespace: e2e-tests-kubectl-k4pw4, resource: bindings, ignored listing per whitelist
Jan 26 12:42:32.755: INFO: namespace e2e-tests-kubectl-k4pw4 deletion completed in 24.525192074s

• [SLOW TEST:60.742 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
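The kubectl get pods -o template invocations in the rolling-update test above rely on Go template syntax to extract pod names and container state from the returned objects. The sketch below evaluates a template of the same shape with the standard library; the struct is a hand-rolled stand-in for the real pod list (kubectl itself evaluates the template against untyped JSON, which is why the log uses lower-case field names).

package main

import (
	"os"
	"text/template"
)

// item is a tiny stand-in for one entry of the pod list kubectl renders.
type item struct {
	Metadata struct{ Name string }
}

func main() {
	var list struct{ Items []item }
	for _, n := range []string{
		"e2e-test-nginx-rc-096ee1733ff11305d10b877a0f349fa5-g8zjs",
		"e2e-test-nginx-rc-v47fm",
	} {
		var it item
		it.Metadata.Name = n
		list.Items = append(list.Items, it)
	}
	// Same shape as the --template flag in the log, just against exported fields.
	tmpl := template.Must(template.New("names").Parse("{{range .Items}}{{.Metadata.Name}} {{end}}\n"))
	if err := tmpl.Execute(os.Stdout, list); err != nil {
		panic(err)
	}
}

Running it prints the two pod names separated by spaces, matching the stdout line logged while both the old and new controllers' pods were still present.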
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:42:32.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 12:42:33.115: INFO: Waiting up to 5m0s for pod "downwardapi-volume-528a9222-4039-11ea-b664-0242ac110005" in namespace "e2e-tests-downward-api-dlkpx" to be "success or failure"
Jan 26 12:42:33.161: INFO: Pod "downwardapi-volume-528a9222-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.852541ms
Jan 26 12:42:35.180: INFO: Pod "downwardapi-volume-528a9222-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064663409s
Jan 26 12:42:37.189: INFO: Pod "downwardapi-volume-528a9222-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073444667s
Jan 26 12:42:39.342: INFO: Pod "downwardapi-volume-528a9222-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.226827741s
Jan 26 12:42:41.385: INFO: Pod "downwardapi-volume-528a9222-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.270164071s
Jan 26 12:42:43.403: INFO: Pod "downwardapi-volume-528a9222-4039-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.287428606s
STEP: Saw pod success
Jan 26 12:42:43.403: INFO: Pod "downwardapi-volume-528a9222-4039-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:42:43.406: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-528a9222-4039-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 12:42:44.033: INFO: Waiting for pod downwardapi-volume-528a9222-4039-11ea-b664-0242ac110005 to disappear
Jan 26 12:42:44.363: INFO: Pod downwardapi-volume-528a9222-4039-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:42:44.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dlkpx" for this suite.
Jan 26 12:42:50.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:42:50.679: INFO: namespace: e2e-tests-downward-api-dlkpx, resource: bindings, ignored listing per whitelist
Jan 26 12:42:50.710: INFO: namespace e2e-tests-downward-api-dlkpx deletion completed in 6.327274529s

• [SLOW TEST:17.954 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
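The pod in the Downward API test above mounts a downward API volume whose file is backed by a resourceFieldRef on limits.cpu; because the container sets no CPU limit, the projected value falls back to the node's allocatable CPU, which is what the test asserts. A rough sketch of building such a pod object with the Kubernetes Go types follows; the image, command, file path and pod name are placeholders, and the program only prints the manifest rather than creating it.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				// No CPU limit is set, so the projected file defaults to node allocatable CPU.
				Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}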
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:42:50.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 26 12:42:50.875: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b7c66,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7c66/configmaps/e2e-watch-test-configmap-a,UID:5d2c0442-4039-11ea-a994-fa163e34d433,ResourceVersion:19525626,Generation:0,CreationTimestamp:2020-01-26 12:42:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 26 12:42:50.875: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b7c66,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7c66/configmaps/e2e-watch-test-configmap-a,UID:5d2c0442-4039-11ea-a994-fa163e34d433,ResourceVersion:19525626,Generation:0,CreationTimestamp:2020-01-26 12:42:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 26 12:43:00.912: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b7c66,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7c66/configmaps/e2e-watch-test-configmap-a,UID:5d2c0442-4039-11ea-a994-fa163e34d433,ResourceVersion:19525639,Generation:0,CreationTimestamp:2020-01-26 12:42:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 26 12:43:00.913: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b7c66,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7c66/configmaps/e2e-watch-test-configmap-a,UID:5d2c0442-4039-11ea-a994-fa163e34d433,ResourceVersion:19525639,Generation:0,CreationTimestamp:2020-01-26 12:42:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 26 12:43:10.940: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b7c66,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7c66/configmaps/e2e-watch-test-configmap-a,UID:5d2c0442-4039-11ea-a994-fa163e34d433,ResourceVersion:19525652,Generation:0,CreationTimestamp:2020-01-26 12:42:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 26 12:43:10.940: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b7c66,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7c66/configmaps/e2e-watch-test-configmap-a,UID:5d2c0442-4039-11ea-a994-fa163e34d433,ResourceVersion:19525652,Generation:0,CreationTimestamp:2020-01-26 12:42:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 26 12:43:20.964: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b7c66,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7c66/configmaps/e2e-watch-test-configmap-a,UID:5d2c0442-4039-11ea-a994-fa163e34d433,ResourceVersion:19525665,Generation:0,CreationTimestamp:2020-01-26 12:42:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 26 12:43:20.964: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b7c66,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7c66/configmaps/e2e-watch-test-configmap-a,UID:5d2c0442-4039-11ea-a994-fa163e34d433,ResourceVersion:19525665,Generation:0,CreationTimestamp:2020-01-26 12:42:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 26 12:43:31.002: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-b7c66,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7c66/configmaps/e2e-watch-test-configmap-b,UID:751374d7-4039-11ea-a994-fa163e34d433,ResourceVersion:19525678,Generation:0,CreationTimestamp:2020-01-26 12:43:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 26 12:43:31.002: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-b7c66,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7c66/configmaps/e2e-watch-test-configmap-b,UID:751374d7-4039-11ea-a994-fa163e34d433,ResourceVersion:19525678,Generation:0,CreationTimestamp:2020-01-26 12:43:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 26 12:43:41.026: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-b7c66,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7c66/configmaps/e2e-watch-test-configmap-b,UID:751374d7-4039-11ea-a994-fa163e34d433,ResourceVersion:19525691,Generation:0,CreationTimestamp:2020-01-26 12:43:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 26 12:43:41.026: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-b7c66,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7c66/configmaps/e2e-watch-test-configmap-b,UID:751374d7-4039-11ea-a994-fa163e34d433,ResourceVersion:19525691,Generation:0,CreationTimestamp:2020-01-26 12:43:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:43:51.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-b7c66" for this suite.
Jan 26 12:43:57.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:43:57.665: INFO: namespace: e2e-tests-watch-b7c66, resource: bindings, ignored listing per whitelist
Jan 26 12:43:57.754: INFO: namespace e2e-tests-watch-b7c66 deletion completed in 6.227957219s

• [SLOW TEST:67.045 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
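Each watcher in the test above is an ordinary label-selector watch on ConfigMaps in the test namespace, and the ADDED/MODIFIED/DELETED lines are the events delivered on its channel. Below is a minimal client-go sketch of one such watcher, assuming a reasonably recent client-go where Watch takes a context (older releases omit that argument); the namespace and label are copied from the log purely as an example.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Watch configmaps carrying label A, as the test's first watcher does.
	w, err := cs.CoreV1().ConfigMaps("e2e-tests-watch-b7c66").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// ev.Type is ADDED, MODIFIED or DELETED, matching the "Got :" lines above.
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}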
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:43:57.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0126 12:44:09.791749       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 12:44:09.791: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:44:09.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4rzpr" for this suite.
Jan 26 12:44:35.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:44:35.993: INFO: namespace: e2e-tests-gc-4rzpr, resource: bindings, ignored listing per whitelist
Jan 26 12:44:36.121: INFO: namespace e2e-tests-gc-4rzpr deletion completed in 26.320350063s

• [SLOW TEST:38.366 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
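The garbage-collector test builds two replication controllers, gives half of the first controller's pods the second controller as an additional owner, and then checks that those doubly-owned pods survive deletion of the first controller. A small sketch of what adding such a second owner reference to a pod object looks like in Go; the UID is made up and nothing is sent to the API server.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	pod := corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "simpletest-pod"}}

	// Pretend the pod was created by simpletest-rc-to-be-deleted and we now add
	// simpletest-rc-to-stay as a second, still-valid owner (UID is a placeholder).
	stay := metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       "simpletest-rc-to-stay",
		UID:        types.UID("00000000-0000-0000-0000-000000000001"),
	}
	pod.OwnerReferences = append(pod.OwnerReferences, stay)

	fmt.Printf("%s now has %d owner reference(s)\n", pod.Name, len(pod.OwnerReferences))
}

With a second valid owner in place, deleting only the first controller should leave the pod behind, which is the behaviour the test waits for before gathering metrics.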
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:44:36.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 26 12:44:36.288: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 26 12:44:36.296: INFO: Waiting for terminating namespaces to be deleted...
Jan 26 12:44:36.299: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 26 12:44:36.320: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 26 12:44:36.320: INFO: 	Container coredns ready: true, restart count 0
Jan 26 12:44:36.320: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 26 12:44:36.320: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 26 12:44:36.320: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 26 12:44:36.320: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 26 12:44:36.320: INFO: 	Container coredns ready: true, restart count 0
Jan 26 12:44:36.320: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 26 12:44:36.320: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 26 12:44:36.320: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 26 12:44:36.320: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 26 12:44:36.320: INFO: 	Container weave ready: true, restart count 0
Jan 26 12:44:36.320: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-a0f1bfb8-4039-11ea-b664-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-a0f1bfb8-4039-11ea-b664-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-a0f1bfb8-4039-11ea-b664-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:44:56.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-wj76d" for this suite.
Jan 26 12:45:10.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:45:10.937: INFO: namespace: e2e-tests-sched-pred-wj76d, resource: bindings, ignored listing per whitelist
Jan 26 12:45:11.029: INFO: namespace e2e-tests-sched-pred-wj76d deletion completed in 14.188788707s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:34.908 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
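For orientation, the scheduling case above applies a random label to a node and then relaunches a pod whose nodeSelector must match that label. A minimal sketch of that kind of pod spec, using the k8s.io/api Go types the e2e framework is built on (pod name, label key/value, and image here are illustrative, not the values the suite generated):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nodeSelectorPod returns a pod that the scheduler may only place on a node
// carrying the given label; the e2e case first applies the label to a node,
// then expects this pod to land there.
func nodeSelectorPod(labelKey, labelValue string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{labelKey: labelValue},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
			}},
		},
	}
}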
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:45:11.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 26 12:45:11.218: INFO: Waiting up to 5m0s for pod "pod-b0d0d427-4039-11ea-b664-0242ac110005" in namespace "e2e-tests-emptydir-f2pvv" to be "success or failure"
Jan 26 12:45:11.310: INFO: Pod "pod-b0d0d427-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 91.336552ms
Jan 26 12:45:13.497: INFO: Pod "pod-b0d0d427-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278433199s
Jan 26 12:45:15.527: INFO: Pod "pod-b0d0d427-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308270178s
Jan 26 12:45:17.538: INFO: Pod "pod-b0d0d427-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.319679042s
Jan 26 12:45:19.572: INFO: Pod "pod-b0d0d427-4039-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.353520283s
STEP: Saw pod success
Jan 26 12:45:19.572: INFO: Pod "pod-b0d0d427-4039-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:45:19.579: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b0d0d427-4039-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 12:45:19.874: INFO: Waiting for pod pod-b0d0d427-4039-11ea-b664-0242ac110005 to disappear
Jan 26 12:45:19.897: INFO: Pod pod-b0d0d427-4039-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:45:19.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-f2pvv" for this suite.
Jan 26 12:45:26.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:45:26.286: INFO: namespace: e2e-tests-emptydir-f2pvv, resource: bindings, ignored listing per whitelist
Jan 26 12:45:26.307: INFO: namespace e2e-tests-emptydir-f2pvv deletion completed in 6.327878962s

• [SLOW TEST:15.277 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
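The emptydir cases in this log all share one shape: a pod mounts an emptyDir volume, creates a file with the requested mode while running as a non-root user, and the test reads the result back. A rough sketch with k8s.io/api types (the real suite uses its own mounttest image and flags; the image, UID, and shell command below are only illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod mounts an emptyDir volume at /test-volume, creates a file with
// mode 0777 in it as a non-root UID, and prints the resulting permissions.
func emptyDirPod(medium corev1.StorageMedium) *corev1.Pod {
	nonRootUID := int64(1001) // illustrative non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-test"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: medium},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}

Passing corev1.StorageMediumDefault models the "node default medium" case; corev1.StorageMediumMemory models the tmpfs variant that appears further down in this log, and the (non-root,0644,default) case later differs only in the requested file mode.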
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:45:26.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b9f512b6-4039-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 12:45:26.650: INFO: Waiting up to 5m0s for pod "pod-secrets-b9f7cd90-4039-11ea-b664-0242ac110005" in namespace "e2e-tests-secrets-64bv6" to be "success or failure"
Jan 26 12:45:26.697: INFO: Pod "pod-secrets-b9f7cd90-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.913849ms
Jan 26 12:45:28.711: INFO: Pod "pod-secrets-b9f7cd90-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060637863s
Jan 26 12:45:30.720: INFO: Pod "pod-secrets-b9f7cd90-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069543509s
Jan 26 12:45:32.757: INFO: Pod "pod-secrets-b9f7cd90-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106643817s
Jan 26 12:45:35.198: INFO: Pod "pod-secrets-b9f7cd90-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547862557s
Jan 26 12:45:37.232: INFO: Pod "pod-secrets-b9f7cd90-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.582346736s
Jan 26 12:45:39.253: INFO: Pod "pod-secrets-b9f7cd90-4039-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.603099571s
STEP: Saw pod success
Jan 26 12:45:39.253: INFO: Pod "pod-secrets-b9f7cd90-4039-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:45:39.262: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b9f7cd90-4039-11ea-b664-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 26 12:45:39.436: INFO: Waiting for pod pod-secrets-b9f7cd90-4039-11ea-b664-0242ac110005 to disappear
Jan 26 12:45:39.453: INFO: Pod pod-secrets-b9f7cd90-4039-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:45:39.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-64bv6" for this suite.
Jan 26 12:45:45.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:45:45.564: INFO: namespace: e2e-tests-secrets-64bv6, resource: bindings, ignored listing per whitelist
Jan 26 12:45:45.789: INFO: namespace e2e-tests-secrets-64bv6 deletion completed in 6.325387285s

• [SLOW TEST:19.480 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
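The Secrets case above mounts one secret through two separate volumes in the same pod. A minimal sketch of such a spec with k8s.io/api types (secret name, key, and image are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretTwoVolumesPod mounts the same secret via two volumes, mirroring the
// "consumable in multiple volumes in a pod" case; the container reads the same
// key back through both mount points.
func secretTwoVolumesPod(secretName string) *corev1.Pod {
	secretSource := corev1.VolumeSource{
		Secret: &corev1.SecretVolumeSource{SecretName: secretName},
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: secretSource},
				{Name: "secret-volume-2", VolumeSource: secretSource},
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
}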
S
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:45:45.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 26 12:45:46.024: INFO: Waiting up to 5m0s for pod "downward-api-c58fb6f4-4039-11ea-b664-0242ac110005" in namespace "e2e-tests-downward-api-ch9pb" to be "success or failure"
Jan 26 12:45:46.039: INFO: Pod "downward-api-c58fb6f4-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.512282ms
Jan 26 12:45:48.124: INFO: Pod "downward-api-c58fb6f4-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099638964s
Jan 26 12:45:50.151: INFO: Pod "downward-api-c58fb6f4-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126717657s
Jan 26 12:45:52.180: INFO: Pod "downward-api-c58fb6f4-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155852739s
Jan 26 12:45:54.211: INFO: Pod "downward-api-c58fb6f4-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186767277s
Jan 26 12:45:56.232: INFO: Pod "downward-api-c58fb6f4-4039-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.208104131s
STEP: Saw pod success
Jan 26 12:45:56.232: INFO: Pod "downward-api-c58fb6f4-4039-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:45:56.244: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c58fb6f4-4039-11ea-b664-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 26 12:45:56.375: INFO: Waiting for pod downward-api-c58fb6f4-4039-11ea-b664-0242ac110005 to disappear
Jan 26 12:45:56.432: INFO: Pod downward-api-c58fb6f4-4039-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:45:56.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ch9pb" for this suite.
Jan 26 12:46:02.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:46:02.574: INFO: namespace: e2e-tests-downward-api-ch9pb, resource: bindings, ignored listing per whitelist
Jan 26 12:46:02.682: INFO: namespace e2e-tests-downward-api-ch9pb deletion completed in 6.239326714s

• [SLOW TEST:16.893 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
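The Downward API case above injects pod metadata into the container's environment. A small sketch of the env-var plumbing with k8s.io/api types (variable names, image, and command are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIEnvPod exposes the pod's own name and UID to its container via
// fieldRef env vars, which is what the "pod UID as env vars" case exercises.
func downwardAPIEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep POD_"},
				Env: []corev1.EnvVar{
					{
						Name: "POD_NAME",
						ValueFrom: &corev1.EnvVarSource{
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						},
					},
					{
						Name: "POD_UID",
						ValueFrom: &corev1.EnvVarSource{
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
						},
					},
				},
			}},
		},
	}
}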
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:46:02.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-cfb9ffb2-4039-11ea-b664-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:46:15.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pdfp2" for this suite.
Jan 26 12:46:39.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:46:39.450: INFO: namespace: e2e-tests-configmap-pdfp2, resource: bindings, ignored listing per whitelist
Jan 26 12:46:39.455: INFO: namespace e2e-tests-configmap-pdfp2 deletion completed in 24.264059439s

• [SLOW TEST:36.773 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
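The ConfigMap case above ("binary data should be reflected in volume") stores both text and binary keys and waits until the pod sees each key as a file with the payload intact. A minimal sketch of such a ConfigMap object (name, keys, and bytes are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// binaryConfigMap carries a text key and a binary key; when mounted as a volume
// each key becomes a file, and the binary payload must round-trip byte for byte.
func binaryConfigMap(name string) *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xff, 0xfe, 0x00, 0x0a}}, // arbitrary non-UTF-8 bytes
	}
}

The later "should be consumable from pods in volume" case in this log is the same idea without the BinaryData field.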
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:46:39.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 26 12:46:39.731: INFO: Waiting up to 5m0s for pod "pod-e592a419-4039-11ea-b664-0242ac110005" in namespace "e2e-tests-emptydir-8qdbh" to be "success or failure"
Jan 26 12:46:39.783: INFO: Pod "pod-e592a419-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 51.801502ms
Jan 26 12:46:41.810: INFO: Pod "pod-e592a419-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078609802s
Jan 26 12:46:43.839: INFO: Pod "pod-e592a419-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107332364s
Jan 26 12:46:45.952: INFO: Pod "pod-e592a419-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22014446s
Jan 26 12:46:47.984: INFO: Pod "pod-e592a419-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.252640921s
Jan 26 12:46:49.995: INFO: Pod "pod-e592a419-4039-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.263394867s
STEP: Saw pod success
Jan 26 12:46:49.995: INFO: Pod "pod-e592a419-4039-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:46:49.998: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e592a419-4039-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 12:46:50.139: INFO: Waiting for pod pod-e592a419-4039-11ea-b664-0242ac110005 to disappear
Jan 26 12:46:50.149: INFO: Pod pod-e592a419-4039-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:46:50.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8qdbh" for this suite.
Jan 26 12:46:56.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:46:56.338: INFO: namespace: e2e-tests-emptydir-8qdbh, resource: bindings, ignored listing per whitelist
Jan 26 12:46:56.424: INFO: namespace e2e-tests-emptydir-8qdbh deletion completed in 6.268024379s

• [SLOW TEST:16.968 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:46:56.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 12:46:56.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-efcbe1ab-4039-11ea-b664-0242ac110005" in namespace "e2e-tests-downward-api-tdvf5" to be "success or failure"
Jan 26 12:46:57.029: INFO: Pod "downwardapi-volume-efcbe1ab-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 145.333258ms
Jan 26 12:46:59.285: INFO: Pod "downwardapi-volume-efcbe1ab-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.401937426s
Jan 26 12:47:01.305: INFO: Pod "downwardapi-volume-efcbe1ab-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.421215679s
Jan 26 12:47:03.395: INFO: Pod "downwardapi-volume-efcbe1ab-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.511523623s
Jan 26 12:47:05.408: INFO: Pod "downwardapi-volume-efcbe1ab-4039-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.524972577s
Jan 26 12:47:07.422: INFO: Pod "downwardapi-volume-efcbe1ab-4039-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.538904869s
STEP: Saw pod success
Jan 26 12:47:07.422: INFO: Pod "downwardapi-volume-efcbe1ab-4039-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:47:07.443: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-efcbe1ab-4039-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 12:47:08.317: INFO: Waiting for pod downwardapi-volume-efcbe1ab-4039-11ea-b664-0242ac110005 to disappear
Jan 26 12:47:08.342: INFO: Pod downwardapi-volume-efcbe1ab-4039-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:47:08.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tdvf5" for this suite.
Jan 26 12:47:14.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:47:14.435: INFO: namespace: e2e-tests-downward-api-tdvf5, resource: bindings, ignored listing per whitelist
Jan 26 12:47:14.555: INFO: namespace e2e-tests-downward-api-tdvf5 deletion completed in 6.202998534s

• [SLOW TEST:18.131 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
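The Downward API volume case above ("should provide podname only") projects the pod's name into a file rather than an env var. A small sketch of the downwardAPI volume with k8s.io/api types (paths and image are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIVolumePod projects metadata.name into /etc/podinfo/podname; the
// test then reads that file back from the container's logs.
func downwardAPIVolumePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-podname"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}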
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:47:14.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-c7v5t in namespace e2e-tests-proxy-zffrj
I0126 12:47:14.870222       8 runners.go:184] Created replication controller with name: proxy-service-c7v5t, namespace: e2e-tests-proxy-zffrj, replica count: 1
I0126 12:47:15.921070       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:47:16.921516       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:47:17.921805       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:47:18.922258       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:47:19.922573       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:47:20.922856       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:47:21.923085       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:47:22.923369       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 12:47:23.923889       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0126 12:47:24.924495       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0126 12:47:25.924864       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0126 12:47:26.925488       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0126 12:47:27.925772       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0126 12:47:28.926042       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0126 12:47:29.926355       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0126 12:47:30.926827       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0126 12:47:31.927444       8 runners.go:184] proxy-service-c7v5t Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 26 12:47:32.017: INFO: setup took 17.206301884s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 26 12:47:32.047: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-zffrj/pods/http:proxy-service-c7v5t-mv66m:1080/proxy/: 
[... output truncated: the proxy response bodies, the remaining attempts and summary of the Proxy test, and the header of the following [sig-api-machinery] Garbage collector test are missing here ...]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0126 12:47:48.362333       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 12:47:48.362: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:47:48.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-9dz96" for this suite.
Jan 26 12:47:54.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:47:54.682: INFO: namespace: e2e-tests-gc-9dz96, resource: bindings, ignored listing per whitelist
Jan 26 12:47:54.744: INFO: namespace e2e-tests-gc-9dz96 deletion completed in 6.370974372s

• [SLOW TEST:9.210 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
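The garbage-collector case above deletes the deployment without orphaning and then waits for the owned ReplicaSet and pods to be collected via their ownerReferences, which is why the log briefly shows "expected 0 rs, got 1 rs" while the collector catches up. A rough sketch of a non-orphaning delete, assuming current client-go signatures (the 1.13-era client used for this run takes no context argument); namespace and name are placeholders:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteWithoutOrphaning removes a Deployment and lets the garbage collector
// chase down its ReplicaSets and Pods through their ownerReferences.
func deleteWithoutOrphaning(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground // DeletePropagationOrphan would leave the ReplicaSet behind
	return cs.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{PropagationPolicy: &policy})
}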
SSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:47:54.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:47:55.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-l24pd" for this suite.
Jan 26 12:48:01.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:48:01.163: INFO: namespace: e2e-tests-services-l24pd, resource: bindings, ignored listing per whitelist
Jan 26 12:48:01.303: INFO: namespace e2e-tests-services-l24pd deletion completed in 6.198183253s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.559 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
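The Services case above only inspects the built-in "kubernetes" service in the default namespace and checks that it exposes https on port 443. A small sketch of an equivalent check, assuming current client-go signatures (illustrative, not the suite's own code):

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkMasterService looks up the built-in "kubernetes" service and confirms
// it serves an https port on 443, roughly what "secure master service" asserts.
func checkMasterService(ctx context.Context, cs kubernetes.Interface) error {
	svc, err := cs.CoreV1().Services("default").Get(ctx, "kubernetes", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, p := range svc.Spec.Ports {
		if p.Name == "https" && p.Port == 443 {
			return nil
		}
	}
	return fmt.Errorf("service %q has no https/443 port", svc.Name)
}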
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:48:01.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-164c7942-403a-11ea-b664-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 26 12:48:01.525: INFO: Waiting up to 5m0s for pod "pod-configmaps-164d46ab-403a-11ea-b664-0242ac110005" in namespace "e2e-tests-configmap-xrt6w" to be "success or failure"
Jan 26 12:48:01.536: INFO: Pod "pod-configmaps-164d46ab-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.256631ms
Jan 26 12:48:03.633: INFO: Pod "pod-configmaps-164d46ab-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107816749s
Jan 26 12:48:05.660: INFO: Pod "pod-configmaps-164d46ab-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134576939s
Jan 26 12:48:07.805: INFO: Pod "pod-configmaps-164d46ab-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.279909991s
Jan 26 12:48:09.814: INFO: Pod "pod-configmaps-164d46ab-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.288212388s
Jan 26 12:48:11.833: INFO: Pod "pod-configmaps-164d46ab-403a-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.307498108s
STEP: Saw pod success
Jan 26 12:48:11.833: INFO: Pod "pod-configmaps-164d46ab-403a-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:48:11.848: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-164d46ab-403a-11ea-b664-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 26 12:48:12.038: INFO: Waiting for pod pod-configmaps-164d46ab-403a-11ea-b664-0242ac110005 to disappear
Jan 26 12:48:13.176: INFO: Pod pod-configmaps-164d46ab-403a-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:48:13.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xrt6w" for this suite.
Jan 26 12:48:19.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:48:19.673: INFO: namespace: e2e-tests-configmap-xrt6w, resource: bindings, ignored listing per whitelist
Jan 26 12:48:19.687: INFO: namespace e2e-tests-configmap-xrt6w deletion completed in 6.476904231s

• [SLOW TEST:18.383 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:48:19.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-21519057-403a-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 12:48:20.053: INFO: Waiting up to 5m0s for pod "pod-secrets-21542a4f-403a-11ea-b664-0242ac110005" in namespace "e2e-tests-secrets-4548p" to be "success or failure"
Jan 26 12:48:20.079: INFO: Pod "pod-secrets-21542a4f-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.337045ms
Jan 26 12:48:22.184: INFO: Pod "pod-secrets-21542a4f-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130632661s
Jan 26 12:48:24.207: INFO: Pod "pod-secrets-21542a4f-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153765191s
Jan 26 12:48:26.228: INFO: Pod "pod-secrets-21542a4f-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.174058366s
Jan 26 12:48:28.614: INFO: Pod "pod-secrets-21542a4f-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.560379167s
Jan 26 12:48:30.748: INFO: Pod "pod-secrets-21542a4f-403a-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.694158096s
STEP: Saw pod success
Jan 26 12:48:30.748: INFO: Pod "pod-secrets-21542a4f-403a-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:48:30.755: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-21542a4f-403a-11ea-b664-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 26 12:48:31.161: INFO: Waiting for pod pod-secrets-21542a4f-403a-11ea-b664-0242ac110005 to disappear
Jan 26 12:48:31.166: INFO: Pod pod-secrets-21542a4f-403a-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:48:31.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4548p" for this suite.
Jan 26 12:48:37.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:48:37.356: INFO: namespace: e2e-tests-secrets-4548p, resource: bindings, ignored listing per whitelist
Jan 26 12:48:37.476: INFO: namespace e2e-tests-secrets-4548p deletion completed in 6.303624222s

• [SLOW TEST:17.789 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
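The "volume with mappings" variant above differs from a plain secret mount by remapping a key to a custom file path (and optionally a per-file mode) via Items. A short sketch of just that volume source (key, path, and mode are illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

// mappedSecretVolume mounts a secret but remaps one key to a custom file name
// and mode, which is the extra twist exercised by the mappings case.
func mappedSecretVolume(secretName string) corev1.VolumeSource {
	mode := int32(0400) // illustrative per-file mode
	return corev1.VolumeSource{
		Secret: &corev1.SecretVolumeSource{
			SecretName: secretName,
			Items: []corev1.KeyToPath{{
				Key:  "data-1",
				Path: "new-path-data-1",
				Mode: &mode,
			}},
		},
	}
}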
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:48:37.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 26 12:48:37.675: INFO: Waiting up to 5m0s for pod "pod-2be01dff-403a-11ea-b664-0242ac110005" in namespace "e2e-tests-emptydir-h5m4p" to be "success or failure"
Jan 26 12:48:37.692: INFO: Pod "pod-2be01dff-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.271545ms
Jan 26 12:48:39.723: INFO: Pod "pod-2be01dff-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047918088s
Jan 26 12:48:41.735: INFO: Pod "pod-2be01dff-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059957363s
Jan 26 12:48:44.131: INFO: Pod "pod-2be01dff-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455771345s
Jan 26 12:48:46.151: INFO: Pod "pod-2be01dff-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.476019462s
Jan 26 12:48:48.176: INFO: Pod "pod-2be01dff-403a-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.501172171s
STEP: Saw pod success
Jan 26 12:48:48.176: INFO: Pod "pod-2be01dff-403a-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:48:48.188: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2be01dff-403a-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 12:48:48.317: INFO: Waiting for pod pod-2be01dff-403a-11ea-b664-0242ac110005 to disappear
Jan 26 12:48:48.397: INFO: Pod pod-2be01dff-403a-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:48:48.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-h5m4p" for this suite.
Jan 26 12:48:54.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:48:54.665: INFO: namespace: e2e-tests-emptydir-h5m4p, resource: bindings, ignored listing per whitelist
Jan 26 12:48:54.677: INFO: namespace e2e-tests-emptydir-h5m4p deletion completed in 6.264663567s

• [SLOW TEST:17.201 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:48:54.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan 26 12:48:55.438: INFO: created pod pod-service-account-defaultsa
Jan 26 12:48:55.438: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 26 12:48:55.455: INFO: created pod pod-service-account-mountsa
Jan 26 12:48:55.455: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 26 12:48:55.587: INFO: created pod pod-service-account-nomountsa
Jan 26 12:48:55.587: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 26 12:48:55.651: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 26 12:48:55.651: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 26 12:48:55.774: INFO: created pod pod-service-account-mountsa-mountspec
Jan 26 12:48:55.775: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 26 12:48:55.940: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 26 12:48:55.940: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 26 12:48:56.002: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 26 12:48:56.003: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 26 12:48:56.354: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 26 12:48:56.355: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 26 12:48:57.207: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 26 12:48:57.207: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:48:57.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-4z6c4" for this suite.
Jan 26 12:49:25.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:49:25.622: INFO: namespace: e2e-tests-svcaccounts-4z6c4, resource: bindings, ignored listing per whitelist
Jan 26 12:49:25.711: INFO: namespace e2e-tests-svcaccounts-4z6c4 deletion completed in 26.906072849s

• [SLOW TEST:31.034 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
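The ServiceAccounts case above walks a matrix of service accounts (default, mount, no-mount) crossed with pods that set, unset, or omit spec.automountServiceAccountToken; the pod-level field wins over the ServiceAccount's setting, which is why the log reports different "token volume mount" outcomes per pod. A minimal sketch of the opt-out side (names, image, and command are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// optOutPod runs under the given service account but explicitly refuses the
// API token mount; spec.automountServiceAccountToken overrides whatever the
// ServiceAccount itself declares.
func optOutPod(serviceAccount string) *corev1.Pod {
	automount := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountspec"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           serviceAccount,
			AutomountServiceAccountToken: &automount,
			Containers: []corev1.Container{{
				Name:    "token-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || true"},
			}},
		},
	}
}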
SSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:49:25.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 12:49:25.874: INFO: Creating deployment "nginx-deployment"
Jan 26 12:49:25.885: INFO: Waiting for observed generation 1
Jan 26 12:49:28.884: INFO: Waiting for all required pods to come up
Jan 26 12:49:28.905: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 26 12:50:10.646: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 26 12:50:10.671: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 26 12:50:10.710: INFO: Updating deployment nginx-deployment
Jan 26 12:50:10.710: INFO: Waiting for observed generation 2
Jan 26 12:50:12.746: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 26 12:50:12.751: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 26 12:50:12.754: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 26 12:50:12.769: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 26 12:50:12.769: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 26 12:50:12.773: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 26 12:50:12.781: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 26 12:50:12.781: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 26 12:50:12.794: INFO: Updating deployment nginx-deployment
Jan 26 12:50:12.794: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 26 12:50:15.202: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 26 12:50:15.216: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
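The .spec.replicas = 20 and = 13 figures above follow from proportional scaling: with the rollout stuck on the bad image, the deployment may hold 30 replicas plus maxSurge 3 (the dump below shows MaxSurge:3, MaxUnavailable:2), i.e. 33 in total; the old and new ReplicaSets currently hold 8 and 5, so 20 replicas are added and split roughly 8:5, giving 20 and 13. A toy model of that split (an approximation of the controller's behaviour, not its actual code):

package sketch

// proportionalSplit mimics, in spirit, how the deployment controller spreads
// extra replicas across the old and new ReplicaSets while a rollout is stalled:
// proportional to current sizes, with the remainder going to the other set.
// With old=8, new=5 and 20 replicas to add ((30 + maxSurge 3) - 13 existing),
// it yields old=20 and new=13, the values verified in the log above.
func proportionalSplit(oldReplicas, newReplicas, toAdd int32) (int32, int32) {
	total := oldReplicas + newReplicas
	addOld := toAdd * oldReplicas / total // integer division, as a rough model
	addNew := toAdd - addOld
	return oldReplicas + addOld, newReplicas + addNew
}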
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 26 12:50:16.009: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tx8k5/deployments/nginx-deployment,UID:489d32a9-403a-11ea-a994-fa163e34d433,ResourceVersion:19526891,Generation:3,CreationTimestamp:2020-01-26 12:49:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Available True 2020-01-26 12:50:08 +0000 UTC 2020-01-26 12:50:08 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-26 12:50:11 +0000 UTC 2020-01-26 12:49:25 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 26 12:50:16.434: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tx8k5/replicasets/nginx-deployment-5c98f8fb5,UID:6358a28f-403a-11ea-a994-fa163e34d433,ResourceVersion:19526896,Generation:3,CreationTimestamp:2020-01-26 12:50:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 489d32a9-403a-11ea-a994-fa163e34d433 0xc00266b6a7 0xc00266b6a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 26 12:50:16.434: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 26 12:50:16.435: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tx8k5/replicasets/nginx-deployment-85ddf47c5d,UID:48a33d99-403a-11ea-a994-fa163e34d433,ResourceVersion:19526892,Generation:3,CreationTimestamp:2020-01-26 12:49:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 489d32a9-403a-11ea-a994-fa163e34d433 0xc00266ba27 0xc00266ba28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan 26 12:50:16.513: INFO: Pod "nginx-deployment-5c98f8fb5-2fls7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2fls7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-5c98f8fb5-2fls7,UID:667c8221-403a-11ea-a994-fa163e34d433,ResourceVersion:19526901,Generation:0,CreationTimestamp:2020-01-26 12:50:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6358a28f-403a-11ea-a994-fa163e34d433 0xc00232c627 0xc00232c628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00232c690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00232c6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 26 12:50:16.514: INFO: Pod "nginx-deployment-5c98f8fb5-47d8h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-47d8h,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-5c98f8fb5-47d8h,UID:635f5846-403a-11ea-a994-fa163e34d433,ResourceVersion:19526871,Generation:0,CreationTimestamp:2020-01-26 12:50:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6358a28f-403a-11ea-a994-fa163e34d433 0xc00232c710 0xc00232c711}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00232c780} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00232c7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-26 12:50:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 26 12:50:16.514: INFO: Pod "nginx-deployment-5c98f8fb5-bmgck" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bmgck,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-5c98f8fb5-bmgck,UID:6373c6ba-403a-11ea-a994-fa163e34d433,ResourceVersion:19526893,Generation:0,CreationTimestamp:2020-01-26 12:50:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6358a28f-403a-11ea-a994-fa163e34d433 0xc00232c867 0xc00232c868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00232c8d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00232c8f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-26 12:50:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 26 12:50:16.514: INFO: Pod "nginx-deployment-5c98f8fb5-gss6q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gss6q,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-5c98f8fb5-gss6q,UID:639d6ca6-403a-11ea-a994-fa163e34d433,ResourceVersion:19526899,Generation:0,CreationTimestamp:2020-01-26 12:50:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6358a28f-403a-11ea-a994-fa163e34d433 0xc00232cb47 0xc00232cb48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00232cc40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00232cc60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-26 12:50:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 26 12:50:16.514: INFO: Pod "nginx-deployment-5c98f8fb5-hx5mc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hx5mc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-5c98f8fb5-hx5mc,UID:639488a4-403a-11ea-a994-fa163e34d433,ResourceVersion:19526870,Generation:0,CreationTimestamp:2020-01-26 12:50:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6358a28f-403a-11ea-a994-fa163e34d433 0xc00232cd27 0xc00232cd28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00232cd90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00232cdb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 26 12:50:16.515: INFO: Pod "nginx-deployment-5c98f8fb5-nsxjc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nsxjc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-5c98f8fb5-nsxjc,UID:6373efc3-403a-11ea-a994-fa163e34d433,ResourceVersion:19526890,Generation:0,CreationTimestamp:2020-01-26 12:50:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6358a28f-403a-11ea-a994-fa163e34d433 0xc00232cfd7 0xc00232cfd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00232d1a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00232d1c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-26 12:50:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 26 12:50:16.515: INFO: Pod "nginx-deployment-85ddf47c5d-fdvjx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fdvjx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-85ddf47c5d-fdvjx,UID:667be926-403a-11ea-a994-fa163e34d433,ResourceVersion:19526902,Generation:0,CreationTimestamp:2020-01-26 12:50:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 48a33d99-403a-11ea-a994-fa163e34d433 0xc00232d3c7 0xc00232d3c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00232d5a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00232d5c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 26 12:50:16.515: INFO: Pod "nginx-deployment-85ddf47c5d-hnc6d" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hnc6d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-85ddf47c5d-hnc6d,UID:48cf7576-403a-11ea-a994-fa163e34d433,ResourceVersion:19526808,Generation:0,CreationTimestamp:2020-01-26 12:49:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 48a33d99-403a-11ea-a994-fa163e34d433 0xc00232d770 0xc00232d771}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00232d7d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00232d7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:29 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-26 12:49:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 12:50:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4d56a19f8299e564e925bca103698d410ca37af7f2b39985a1b98516d1371911}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 26 12:50:16.515: INFO: Pod "nginx-deployment-85ddf47c5d-hwhf6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hwhf6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-85ddf47c5d-hwhf6,UID:48c2162e-403a-11ea-a994-fa163e34d433,ResourceVersion:19526803,Generation:0,CreationTimestamp:2020-01-26 12:49:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 48a33d99-403a-11ea-a994-fa163e34d433 0xc00232d9f7 0xc00232d9f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00232db30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00232db50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-26 12:49:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 12:50:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c55de5f7212b4474c4877f0f4c6c426f37df5e41d0498d46b488e0176c8375f1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 26 12:50:16.515: INFO: Pod "nginx-deployment-85ddf47c5d-ntvlx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ntvlx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-85ddf47c5d-ntvlx,UID:48ab9ed6-403a-11ea-a994-fa163e34d433,ResourceVersion:19526837,Generation:0,CreationTimestamp:2020-01-26 12:49:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 48a33d99-403a-11ea-a994-fa163e34d433 0xc00232dcc7 0xc00232dcc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00232dd40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00232dd60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-26 12:49:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 12:50:00 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://12053e51c4912132225cddfd4829f17c97633d6dedbba07f1af0ae85bff3a60d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 26 12:50:16.516: INFO: Pod "nginx-deployment-85ddf47c5d-pp7c5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pp7c5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-85ddf47c5d-pp7c5,UID:48b2f165-403a-11ea-a994-fa163e34d433,ResourceVersion:19526824,Generation:0,CreationTimestamp:2020-01-26 12:49:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 48a33d99-403a-11ea-a994-fa163e34d433 0xc00232de87 0xc00232de88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00232df00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00232df20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-26 12:49:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 12:50:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://59110d3794576baaf16864091103094cd1b30790cfef693438caacf0eec780c2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 26 12:50:16.516: INFO: Pod "nginx-deployment-85ddf47c5d-q5dgt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q5dgt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-85ddf47c5d-q5dgt,UID:48cf21eb-403a-11ea-a994-fa163e34d433,ResourceVersion:19526832,Generation:0,CreationTimestamp:2020-01-26 12:49:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 48a33d99-403a-11ea-a994-fa163e34d433 0xc00232dff7 0xc00232dff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002182060} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002182080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-26 12:49:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 12:50:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://89a6c72ddcedd8aaf5a0f6592775ccdb928cf6cae8514da250fae310875af222}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 26 12:50:16.516: INFO: Pod "nginx-deployment-85ddf47c5d-sn4b4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sn4b4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-85ddf47c5d-sn4b4,UID:48b294dc-403a-11ea-a994-fa163e34d433,ResourceVersion:19526819,Generation:0,CreationTimestamp:2020-01-26 12:49:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 48a33d99-403a-11ea-a994-fa163e34d433 0xc002182147 0xc002182148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021821b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021821d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-26 12:49:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 12:50:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7f966afb77f7c1387123d4dff77e0e2479aa3a6f4d45bf6f522e526b680262af}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 26 12:50:16.517: INFO: Pod "nginx-deployment-85ddf47c5d-v2n2d" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v2n2d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-85ddf47c5d-v2n2d,UID:48c231e5-403a-11ea-a994-fa163e34d433,ResourceVersion:19526813,Generation:0,CreationTimestamp:2020-01-26 12:49:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 48a33d99-403a-11ea-a994-fa163e34d433 0xc002182297 0xc002182298}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002182300} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002182320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-26 12:49:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 12:49:57 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://76a41b773b96b490901ddb2e0429b73a02134730e22ae85cfbd40f52c7c0ecc3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 26 12:50:16.517: INFO: Pod "nginx-deployment-85ddf47c5d-znmt2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-znmt2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tx8k5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tx8k5/pods/nginx-deployment-85ddf47c5d-znmt2,UID:48c0cb84-403a-11ea-a994-fa163e34d433,ResourceVersion:19526840,Generation:0,CreationTimestamp:2020-01-26 12:49:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 48a33d99-403a-11ea-a994-fa163e34d433 0xc002182507 0xc002182508}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-frf6k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frf6k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-frf6k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002182570} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002182590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:50:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 12:49:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-26 12:49:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 12:50:00 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://65369c9d9fd86dd7c60d122b15fde8c563e52049ca1e1c647eb318e9e3fab9b8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:50:16.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-tx8k5" for this suite.
Jan 26 12:51:21.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:51:21.306: INFO: namespace: e2e-tests-deployment-tx8k5, resource: bindings, ignored listing per whitelist
Jan 26 12:51:21.565: INFO: namespace e2e-tests-deployment-tx8k5 deletion completed in 1m4.582888627s

• [SLOW TEST:115.853 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
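Note: the old ReplicaSet dump above shows Spec Replicas *20 against the annotations deployment.kubernetes.io/desired-replicas: 30 and max-replicas: 33, which leaves 13 pods for the new ReplicaSet (the one stuck pulling the intentionally unresolvable nginx:404 image). Those two sizes follow from the Deployment controller's proportional scaling: when a Deployment is resized mid-rollout, each ReplicaSet is grown roughly in proportion to the share of pods it already owns. The Go sketch below reproduces that arithmetic for the numbers in this run; it is illustrative only, not controller source, and the pre-scale sizes (8 and 5 pods, for a 10-replica Deployment with maxSurge 3) are inferred from the test's usual setup rather than printed verbatim here.

// Illustrative arithmetic only (not kube-controller-manager source): on a
// mid-rollout scale, each ReplicaSet's new size is approximately
// round(currentSize * newAllowedTotal / oldAllowedTotal).
package main

import (
	"fmt"
	"math"
)

// proportionalSize is a hypothetical helper, not part of Kubernetes.
func proportionalSize(current, newTotal, oldTotal int32) int32 {
	return int32(math.Round(float64(current) * float64(newTotal) / float64(oldTotal)))
}

func main() {
	// Inferred pre-scale state: 13 pods allowed in total (10 replicas + maxSurge 3),
	// 8 owned by the old ReplicaSet and 5 by the new one. Scaling the Deployment
	// to 30 raises the allowed total to 33 (the max-replicas: 33 annotation above).
	oldTotal, newTotal := int32(13), int32(33)
	fmt.Println(proportionalSize(8, newTotal, oldTotal)) // 20, matching Replicas:*20 in the dump
	fmt.Println(proportionalSize(5, newTotal, oldTotal)) // 13, the remainder of the 33 allowed pods
}

Run against these inputs the sketch prints 20 and 13, the split the controller had settled on by the time the namespace was torn down.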
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:51:21.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 26 12:51:22.002: INFO: Number of nodes with available pods: 0
Jan 26 12:51:22.002: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:23.022: INFO: Number of nodes with available pods: 0
Jan 26 12:51:23.022: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:24.367: INFO: Number of nodes with available pods: 0
Jan 26 12:51:24.367: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:25.020: INFO: Number of nodes with available pods: 0
Jan 26 12:51:25.020: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:26.047: INFO: Number of nodes with available pods: 0
Jan 26 12:51:26.047: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:27.031: INFO: Number of nodes with available pods: 0
Jan 26 12:51:27.031: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:28.448: INFO: Number of nodes with available pods: 0
Jan 26 12:51:28.448: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:29.021: INFO: Number of nodes with available pods: 0
Jan 26 12:51:29.021: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:30.052: INFO: Number of nodes with available pods: 0
Jan 26 12:51:30.052: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:31.024: INFO: Number of nodes with available pods: 1
Jan 26 12:51:31.024: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 26 12:51:31.087: INFO: Number of nodes with available pods: 0
Jan 26 12:51:31.087: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:32.117: INFO: Number of nodes with available pods: 0
Jan 26 12:51:32.117: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:33.414: INFO: Number of nodes with available pods: 0
Jan 26 12:51:33.414: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:34.124: INFO: Number of nodes with available pods: 0
Jan 26 12:51:34.124: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:35.536: INFO: Number of nodes with available pods: 0
Jan 26 12:51:35.536: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:36.110: INFO: Number of nodes with available pods: 0
Jan 26 12:51:36.110: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:37.098: INFO: Number of nodes with available pods: 0
Jan 26 12:51:37.098: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:38.171: INFO: Number of nodes with available pods: 0
Jan 26 12:51:38.171: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:39.107: INFO: Number of nodes with available pods: 0
Jan 26 12:51:39.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:40.116: INFO: Number of nodes with available pods: 0
Jan 26 12:51:40.116: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:41.117: INFO: Number of nodes with available pods: 0
Jan 26 12:51:41.117: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:42.121: INFO: Number of nodes with available pods: 0
Jan 26 12:51:42.122: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:43.102: INFO: Number of nodes with available pods: 0
Jan 26 12:51:43.102: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:44.115: INFO: Number of nodes with available pods: 0
Jan 26 12:51:44.115: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:45.107: INFO: Number of nodes with available pods: 0
Jan 26 12:51:45.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:46.106: INFO: Number of nodes with available pods: 0
Jan 26 12:51:46.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:47.359: INFO: Number of nodes with available pods: 0
Jan 26 12:51:47.360: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:48.132: INFO: Number of nodes with available pods: 0
Jan 26 12:51:48.132: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:49.115: INFO: Number of nodes with available pods: 0
Jan 26 12:51:49.115: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:50.111: INFO: Number of nodes with available pods: 0
Jan 26 12:51:50.111: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 26 12:51:51.103: INFO: Number of nodes with available pods: 1
Jan 26 12:51:51.103: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-27mkj, will wait for the garbage collector to delete the pods
Jan 26 12:51:51.177: INFO: Deleting DaemonSet.extensions daemon-set took: 13.175662ms
Jan 26 12:51:51.377: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.280921ms
Jan 26 12:51:59.084: INFO: Number of nodes with available pods: 0
Jan 26 12:51:59.084: INFO: Number of running nodes: 0, number of available pods: 0
Jan 26 12:51:59.088: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-27mkj/daemonsets","resourceVersion":"19527448"},"items":null}

Jan 26 12:51:59.093: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-27mkj/pods","resourceVersion":"19527448"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:51:59.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-27mkj" for this suite.
Jan 26 12:52:07.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:52:07.250: INFO: namespace: e2e-tests-daemonsets-27mkj, resource: bindings, ignored listing per whitelist
Jan 26 12:52:07.389: INFO: namespace e2e-tests-daemonsets-27mkj deletion completed in 8.223551621s

• [SLOW TEST:45.824 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
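Note: the long run of "Number of nodes with available pods" lines above is the suite polling the DaemonSet's status until every node that should run a daemon pod reports one available, and doing so again after a pod is deleted to confirm it is revived. A minimal client-go sketch of such a wait loop follows; the function name, the poll interval and the use of a recent client-go (context-taking Get) are assumptions, not the e2e framework's own helper.

// Minimal sketch of a DaemonSet readiness wait, assuming a recent client-go.
package daemoncheck

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDaemonSetReady polls until the DaemonSet reports one available pod
// per node it is scheduled to, or the timeout expires. Hypothetical helper.
func waitForDaemonSetReady(c kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		ds, err := c.AppsV1().DaemonSets(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Number of nodes with available pods: %d/%d\n",
			ds.Status.NumberAvailable, ds.Status.DesiredNumberScheduled)
		return ds.Status.DesiredNumberScheduled > 0 &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
}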
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:52:07.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w4fqf A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-w4fqf;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w4fqf A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-w4fqf;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w4fqf.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-w4fqf.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w4fqf.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-w4fqf.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-w4fqf.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 152.236.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.236.152_udp@PTR;check="$$(dig +tcp +noall +answer +search 152.236.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.236.152_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w4fqf A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-w4fqf;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w4fqf A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-w4fqf;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w4fqf.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-w4fqf.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w4fqf.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-w4fqf.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-w4fqf.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 152.236.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.236.152_udp@PTR;check="$$(dig +tcp +noall +answer +search 152.236.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.236.152_tcp@PTR;sleep 1; done
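Note: both probe scripts walk the same ladder of cluster DNS names for the headless service (the bare service name, service.namespace, service.namespace.svc, the _http._tcp SRV records, the pod A record and a PTR lookup of the service IP), writing an OK marker file for each name that resolves. The Go sketch below performs equivalent lookups with the standard resolver instead of dig; it assumes it runs inside a pod of the e2e-tests-dns-w4fqf namespace and that the cluster DNS suffix is cluster.local.

// Sketch of the lookups the wheezy/jessie probes perform, using Go's resolver
// instead of dig. Assumes in-cluster execution in the test namespace.
package main

import (
	"fmt"
	"net"
)

func main() {
	names := []string{
		"dns-test-service",
		"dns-test-service.e2e-tests-dns-w4fqf",
		"dns-test-service.e2e-tests-dns-w4fqf.svc",
		"dns-test-service.e2e-tests-dns-w4fqf.svc.cluster.local",
	}
	for _, n := range names {
		if addrs, err := net.LookupHost(n); err == nil && len(addrs) > 0 {
			fmt.Println("OK", n, addrs[0]) // analogous to `echo OK > /results/...` in the scripts above
		}
	}
	// SRV lookup for the named http port, matching the _http._tcp queries above.
	if _, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.e2e-tests-dns-w4fqf.svc.cluster.local"); err == nil {
		for _, s := range srvs {
			fmt.Println("SRV", s.Target, s.Port)
		}
	}
}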

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 26 12:52:24.033: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.059: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.078: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-w4fqf from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.098: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-w4fqf from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.114: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-w4fqf.svc from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.128: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-w4fqf.svc from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.136: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.141: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.144: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.147: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.151: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.154: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.158: INFO: Unable to read 10.101.236.152_udp@PTR from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.162: INFO: Unable to read 10.101.236.152_tcp@PTR from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.165: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.169: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.172: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w4fqf from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.176: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w4fqf from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.180: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w4fqf.svc from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.183: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w4fqf.svc from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.187: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.192: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.201: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.205: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.210: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.217: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.221: INFO: Unable to read 10.101.236.152_udp@PTR from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.226: INFO: Unable to read 10.101.236.152_tcp@PTR from pod e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005: the server could not find the requested resource (get pods dns-test-a90e670f-403a-11ea-b664-0242ac110005)
Jan 26 12:52:24.226: INFO: Lookups using e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-w4fqf wheezy_tcp@dns-test-service.e2e-tests-dns-w4fqf wheezy_udp@dns-test-service.e2e-tests-dns-w4fqf.svc wheezy_tcp@dns-test-service.e2e-tests-dns-w4fqf.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.101.236.152_udp@PTR 10.101.236.152_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-w4fqf jessie_tcp@dns-test-service.e2e-tests-dns-w4fqf jessie_udp@dns-test-service.e2e-tests-dns-w4fqf.svc jessie_tcp@dns-test-service.e2e-tests-dns-w4fqf.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-w4fqf.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.101.236.152_udp@PTR 10.101.236.152_tcp@PTR]

Jan 26 12:52:29.450: INFO: DNS probes using e2e-tests-dns-w4fqf/dns-test-a90e670f-403a-11ea-b664-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:52:31.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-w4fqf" for this suite.
Jan 26 12:52:37.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:52:37.731: INFO: namespace: e2e-tests-dns-w4fqf, resource: bindings, ignored listing per whitelist
Jan 26 12:52:37.794: INFO: namespace e2e-tests-dns-w4fqf deletion completed in 6.203328784s

• [SLOW TEST:30.405 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
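The probe pods above run the two dig loops verbatim and drop OK marker files under /results, which the framework then polls; the run of "Unable to read" lines at 12:52:24 is that poll loop failing on its first pass, and the retry five seconds later succeeded, so the lookups themselves passed. A minimal manual reproduction of the same lookups, assuming an image with dig available (tutum/dnsutils here is purely illustrative) and reusing the service name, namespace, and ClusterIP printed in this run:

kubectl run dns-manual-probe --rm -it --restart=Never \
  --image=tutum/dnsutils --namespace=e2e-tests-dns-w4fqf -- sh -c '
    # service A record, over UDP and then TCP
    dig +notcp +noall +answer +search dns-test-service A
    dig +tcp   +noall +answer +search dns-test-service A
    # SRV record published for the named http port
    dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w4fqf.svc SRV
    # PTR record for the service ClusterIP seen in the log
    dig +notcp +noall +answer 152.236.101.10.in-addr.arpa. PTR'
------------------------------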
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:52:37.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan 26 12:52:38.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:52:40.068: INFO: stderr: ""
Jan 26 12:52:40.069: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 26 12:52:40.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:52:40.237: INFO: stderr: ""
Jan 26 12:52:40.237: INFO: stdout: "update-demo-nautilus-lxzzr update-demo-nautilus-zm6pp "
Jan 26 12:52:40.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxzzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:52:40.446: INFO: stderr: ""
Jan 26 12:52:40.446: INFO: stdout: ""
Jan 26 12:52:40.446: INFO: update-demo-nautilus-lxzzr is created but not running
Jan 26 12:52:45.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:52:45.578: INFO: stderr: ""
Jan 26 12:52:45.578: INFO: stdout: "update-demo-nautilus-lxzzr update-demo-nautilus-zm6pp "
Jan 26 12:52:45.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxzzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:52:45.667: INFO: stderr: ""
Jan 26 12:52:45.668: INFO: stdout: ""
Jan 26 12:52:45.668: INFO: update-demo-nautilus-lxzzr is created but not running
Jan 26 12:52:50.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:52:50.871: INFO: stderr: ""
Jan 26 12:52:50.871: INFO: stdout: "update-demo-nautilus-lxzzr update-demo-nautilus-zm6pp "
Jan 26 12:52:50.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxzzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:52:51.010: INFO: stderr: ""
Jan 26 12:52:51.010: INFO: stdout: "true"
Jan 26 12:52:51.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxzzr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:52:51.145: INFO: stderr: ""
Jan 26 12:52:51.145: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 12:52:51.145: INFO: validating pod update-demo-nautilus-lxzzr
Jan 26 12:52:51.172: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 12:52:51.172: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 12:52:51.172: INFO: update-demo-nautilus-lxzzr is verified up and running
Jan 26 12:52:51.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zm6pp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:52:51.276: INFO: stderr: ""
Jan 26 12:52:51.276: INFO: stdout: ""
Jan 26 12:52:51.276: INFO: update-demo-nautilus-zm6pp is created but not running
Jan 26 12:52:56.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:52:56.439: INFO: stderr: ""
Jan 26 12:52:56.439: INFO: stdout: "update-demo-nautilus-lxzzr update-demo-nautilus-zm6pp "
Jan 26 12:52:56.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxzzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:52:56.602: INFO: stderr: ""
Jan 26 12:52:56.603: INFO: stdout: "true"
Jan 26 12:52:56.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxzzr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:52:56.728: INFO: stderr: ""
Jan 26 12:52:56.728: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 12:52:56.728: INFO: validating pod update-demo-nautilus-lxzzr
Jan 26 12:52:56.738: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 12:52:56.738: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 12:52:56.738: INFO: update-demo-nautilus-lxzzr is verified up and running
Jan 26 12:52:56.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zm6pp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:52:56.860: INFO: stderr: ""
Jan 26 12:52:56.860: INFO: stdout: "true"
Jan 26 12:52:56.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zm6pp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:52:57.008: INFO: stderr: ""
Jan 26 12:52:57.008: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 12:52:57.008: INFO: validating pod update-demo-nautilus-zm6pp
Jan 26 12:52:57.018: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 12:52:57.018: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 12:52:57.018: INFO: update-demo-nautilus-zm6pp is verified up and running
STEP: rolling-update to new replication controller
Jan 26 12:52:57.020: INFO: scanned /root for discovery docs: 
Jan 26 12:52:57.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:53:32.564: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 26 12:53:32.564: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 26 12:53:32.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:53:32.710: INFO: stderr: ""
Jan 26 12:53:32.710: INFO: stdout: "update-demo-kitten-9np4l update-demo-kitten-ppjwg "
Jan 26 12:53:32.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9np4l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:53:32.856: INFO: stderr: ""
Jan 26 12:53:32.856: INFO: stdout: "true"
Jan 26 12:53:32.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9np4l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:53:33.146: INFO: stderr: ""
Jan 26 12:53:33.146: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 26 12:53:33.146: INFO: validating pod update-demo-kitten-9np4l
Jan 26 12:53:33.173: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 26 12:53:33.173: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 26 12:53:33.173: INFO: update-demo-kitten-9np4l is verified up and running
Jan 26 12:53:33.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ppjwg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:53:33.253: INFO: stderr: ""
Jan 26 12:53:33.253: INFO: stdout: "true"
Jan 26 12:53:33.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ppjwg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8v8th'
Jan 26 12:53:33.377: INFO: stderr: ""
Jan 26 12:53:33.377: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 26 12:53:33.377: INFO: validating pod update-demo-kitten-ppjwg
Jan 26 12:53:33.390: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 26 12:53:33.390: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 26 12:53:33.390: INFO: update-demo-kitten-ppjwg is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:53:33.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8v8th" for this suite.
Jan 26 12:54:03.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:54:03.671: INFO: namespace: e2e-tests-kubectl-8v8th, resource: bindings, ignored listing per whitelist
Jan 26 12:54:03.681: INFO: namespace e2e-tests-kubectl-8v8th deletion completed in 30.268487029s

• [SLOW TEST:85.886 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
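Condensed, the flow above rolls an RC of nautilus pods over to kitten pods with a 1s update period; the stderr line already flags rolling-update as deprecated. Both the legacy command and the Deployment-based equivalent are sketched below; the image tags are the ones printed in the log, the test actually feeds a full RC manifest on stdin rather than using --image, and the Deployment names are illustrative.

# Legacy path, roughly what the test drives (run in the test namespace):
kubectl rolling-update update-demo-nautilus --update-period=1s \
  --image=gcr.io/kubernetes-e2e-test-images/kitten:1.0

# Modern equivalent with a Deployment and `kubectl rollout`:
kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo
------------------------------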
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:54:03.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-ee76d8f9-403a-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 12:54:04.688: INFO: Waiting up to 5m0s for pod "pod-secrets-eec659c2-403a-11ea-b664-0242ac110005" in namespace "e2e-tests-secrets-9rpbs" to be "success or failure"
Jan 26 12:54:04.696: INFO: Pod "pod-secrets-eec659c2-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120936ms
Jan 26 12:54:06.932: INFO: Pod "pod-secrets-eec659c2-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243713248s
Jan 26 12:54:08.945: INFO: Pod "pod-secrets-eec659c2-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.256980288s
Jan 26 12:54:11.091: INFO: Pod "pod-secrets-eec659c2-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.403567998s
Jan 26 12:54:13.112: INFO: Pod "pod-secrets-eec659c2-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.424628679s
Jan 26 12:54:15.148: INFO: Pod "pod-secrets-eec659c2-403a-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.460582164s
STEP: Saw pod success
Jan 26 12:54:15.148: INFO: Pod "pod-secrets-eec659c2-403a-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:54:15.155: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-eec659c2-403a-11ea-b664-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 26 12:54:15.391: INFO: Waiting for pod pod-secrets-eec659c2-403a-11ea-b664-0242ac110005 to disappear
Jan 26 12:54:15.474: INFO: Pod pod-secrets-eec659c2-403a-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:54:15.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9rpbs" for this suite.
Jan 26 12:54:21.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:54:21.772: INFO: namespace: e2e-tests-secrets-9rpbs, resource: bindings, ignored listing per whitelist
Jan 26 12:54:21.875: INFO: namespace e2e-tests-secrets-9rpbs deletion completed in 6.385233693s
STEP: Destroying namespace "e2e-tests-secret-namespace-9sqzj" for this suite.
Jan 26 12:54:27.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:54:28.049: INFO: namespace: e2e-tests-secret-namespace-9sqzj, resource: bindings, ignored listing per whitelist
Jan 26 12:54:28.105: INFO: namespace e2e-tests-secret-namespace-9sqzj deletion completed in 6.229894226s

• [SLOW TEST:24.423 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
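The point of this test is namespace isolation of Secrets: a second secret with the same name exists in a throwaway namespace (hence the extra e2e-tests-secret-namespace-9sqzj teardown above), and the pod must still mount the one from its own namespace. A self-contained sketch of that setup, with all names illustrative and busybox assumed pullable:

kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl -n demo-a create secret generic shared-name --from-literal=data=from-a
kubectl -n demo-b create secret generic shared-name --from-literal=data=from-b
kubectl -n demo-a create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mount-check
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data"]   # should print "from-a", never "from-b"
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name
EOF
------------------------------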
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:54:28.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 26 12:54:28.324: INFO: Waiting up to 5m0s for pod "pod-fcdfcb75-403a-11ea-b664-0242ac110005" in namespace "e2e-tests-emptydir-nwf66" to be "success or failure"
Jan 26 12:54:28.469: INFO: Pod "pod-fcdfcb75-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 144.613143ms
Jan 26 12:54:30.511: INFO: Pod "pod-fcdfcb75-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186860458s
Jan 26 12:54:32.549: INFO: Pod "pod-fcdfcb75-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225316875s
Jan 26 12:54:34.827: INFO: Pod "pod-fcdfcb75-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.503025921s
Jan 26 12:54:37.201: INFO: Pod "pod-fcdfcb75-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.877173781s
Jan 26 12:54:39.238: INFO: Pod "pod-fcdfcb75-403a-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.914296715s
Jan 26 12:54:41.293: INFO: Pod "pod-fcdfcb75-403a-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.969385296s
STEP: Saw pod success
Jan 26 12:54:41.294: INFO: Pod "pod-fcdfcb75-403a-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:54:41.304: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fcdfcb75-403a-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 12:54:41.487: INFO: Waiting for pod pod-fcdfcb75-403a-11ea-b664-0242ac110005 to disappear
Jan 26 12:54:41.518: INFO: Pod pod-fcdfcb75-403a-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:54:41.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nwf66" for this suite.
Jan 26 12:54:50.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:54:50.354: INFO: namespace: e2e-tests-emptydir-nwf66, resource: bindings, ignored listing per whitelist
Jan 26 12:54:50.421: INFO: namespace e2e-tests-emptydir-nwf66 deletion completed in 8.863707081s

• [SLOW TEST:22.316 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
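A rough stand-in for the pod exercised above: an emptyDir on the default medium, a non-root UID, and a file created with mode 0666 whose permissions are read back from the pod log. The UID, image, and commands are illustrative (the real test uses the mounttest image); the tmpfs variant later in this log differs only in the emptyDir medium.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # non-root, illustrative UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /data/f && chmod 0666 /data/f && ls -ln /data/f"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}               # default medium = node-local disk
EOF
kubectl logs emptydir-mode-check   # once completed, expect -rw-rw-rw- ... /data/f
------------------------------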
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:54:50.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan 26 12:54:50.779: INFO: Waiting up to 5m0s for pod "var-expansion-0a425255-403b-11ea-b664-0242ac110005" in namespace "e2e-tests-var-expansion-2846b" to be "success or failure"
Jan 26 12:54:50.977: INFO: Pod "var-expansion-0a425255-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 197.568713ms
Jan 26 12:54:53.014: INFO: Pod "var-expansion-0a425255-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235113727s
Jan 26 12:54:55.038: INFO: Pod "var-expansion-0a425255-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258565241s
Jan 26 12:54:57.125: INFO: Pod "var-expansion-0a425255-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.345757046s
Jan 26 12:54:59.142: INFO: Pod "var-expansion-0a425255-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.362997795s
Jan 26 12:55:02.529: INFO: Pod "var-expansion-0a425255-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.750193831s
Jan 26 12:55:04.716: INFO: Pod "var-expansion-0a425255-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.936815368s
Jan 26 12:55:06.749: INFO: Pod "var-expansion-0a425255-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.970211573s
Jan 26 12:55:08.786: INFO: Pod "var-expansion-0a425255-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.00694828s
Jan 26 12:55:11.084: INFO: Pod "var-expansion-0a425255-403b-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.305294767s
STEP: Saw pod success
Jan 26 12:55:11.085: INFO: Pod "var-expansion-0a425255-403b-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:55:11.122: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-0a425255-403b-11ea-b664-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 26 12:55:11.448: INFO: Waiting for pod var-expansion-0a425255-403b-11ea-b664-0242ac110005 to disappear
Jan 26 12:55:11.454: INFO: Pod var-expansion-0a425255-403b-11ea-b664-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:55:11.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-2846b" for this suite.
Jan 26 12:55:18.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:55:18.846: INFO: namespace: e2e-tests-var-expansion-2846b, resource: bindings, ignored listing per whitelist
Jan 26 12:55:18.902: INFO: namespace e2e-tests-var-expansion-2846b deletion completed in 7.436823873s

• [SLOW TEST:28.481 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
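The substitution checked above is kubelet-side $(VAR) expansion in a container's args, not shell expansion: the kubelet replaces $(MESSAGE) with the env value before the shell ever runs. A minimal sketch, with names, image, and message all illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "expanded by the kubelet"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]   # substituted by the kubelet, so the shell sees the literal text
EOF
------------------------------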
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:55:18.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 26 12:55:19.356: INFO: Waiting up to 5m0s for pod "downward-api-1b484e9f-403b-11ea-b664-0242ac110005" in namespace "e2e-tests-downward-api-zpm76" to be "success or failure"
Jan 26 12:55:20.888: INFO: Pod "downward-api-1b484e9f-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 1.531074598s
Jan 26 12:55:22.902: INFO: Pod "downward-api-1b484e9f-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.545135158s
Jan 26 12:55:24.933: INFO: Pod "downward-api-1b484e9f-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.575800143s
Jan 26 12:55:26.946: INFO: Pod "downward-api-1b484e9f-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.589153357s
Jan 26 12:55:28.957: INFO: Pod "downward-api-1b484e9f-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.600366078s
Jan 26 12:55:30.971: INFO: Pod "downward-api-1b484e9f-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.614332825s
Jan 26 12:55:33.000: INFO: Pod "downward-api-1b484e9f-403b-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.642869966s
STEP: Saw pod success
Jan 26 12:55:33.000: INFO: Pod "downward-api-1b484e9f-403b-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:55:33.012: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-1b484e9f-403b-11ea-b664-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 26 12:55:33.139: INFO: Waiting for pod downward-api-1b484e9f-403b-11ea-b664-0242ac110005 to disappear
Jan 26 12:55:33.152: INFO: Pod downward-api-1b484e9f-403b-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:55:33.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zpm76" for this suite.
Jan 26 12:55:39.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:55:39.369: INFO: namespace: e2e-tests-downward-api-zpm76, resource: bindings, ignored listing per whitelist
Jan 26 12:55:39.401: INFO: namespace e2e-tests-downward-api-zpm76 deletion completed in 6.24340458s

• [SLOW TEST:20.499 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
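The shape of the downward API pod exercised above: pod name, namespace, and IP injected as environment variables via fieldRef. Variable names and image below are illustrative.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
------------------------------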
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:55:39.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:56:39.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-hmm58" for this suite.
Jan 26 12:57:05.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:57:05.695: INFO: namespace: e2e-tests-container-probe-hmm58, resource: bindings, ignored listing per whitelist
Jan 26 12:57:05.811: INFO: namespace e2e-tests-container-probe-hmm58 deletion completed in 26.226580516s

• [SLOW TEST:86.410 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
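The assertion in this spec is that a permanently failing readiness probe leaves the pod Running but never Ready, and, unlike a liveness probe, never causes a restart; that is why the test simply waits a minute and then checks status. A minimal pod that reproduces that state, with image and timings illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready
spec:
  containers:
  - name: probe-test
    image: busybox
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails: pod stays Running, Ready stays False
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# A failing readiness probe only keeps the pod out of Service endpoints; it never restarts it.
------------------------------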
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:57:05.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-5b03c4bc-403b-11ea-b664-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 26 12:57:06.384: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5b137bc8-403b-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-xbrb5" to be "success or failure"
Jan 26 12:57:06.407: INFO: Pod "pod-projected-secrets-5b137bc8-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.390441ms
Jan 26 12:57:08.593: INFO: Pod "pod-projected-secrets-5b137bc8-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208762297s
Jan 26 12:57:10.638: INFO: Pod "pod-projected-secrets-5b137bc8-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253772685s
Jan 26 12:57:12.917: INFO: Pod "pod-projected-secrets-5b137bc8-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.5337069s
Jan 26 12:57:14.946: INFO: Pod "pod-projected-secrets-5b137bc8-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561906858s
Jan 26 12:57:16.973: INFO: Pod "pod-projected-secrets-5b137bc8-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.589618868s
Jan 26 12:57:19.014: INFO: Pod "pod-projected-secrets-5b137bc8-403b-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.629917329s
STEP: Saw pod success
Jan 26 12:57:19.014: INFO: Pod "pod-projected-secrets-5b137bc8-403b-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:57:19.031: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-5b137bc8-403b-11ea-b664-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 26 12:57:19.264: INFO: Waiting for pod pod-projected-secrets-5b137bc8-403b-11ea-b664-0242ac110005 to disappear
Jan 26 12:57:19.287: INFO: Pod pod-projected-secrets-5b137bc8-403b-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:57:19.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xbrb5" for this suite.
Jan 26 12:57:25.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:57:25.557: INFO: namespace: e2e-tests-projected-xbrb5, resource: bindings, ignored listing per whitelist
Jan 26 12:57:25.570: INFO: namespace e2e-tests-projected-xbrb5 deletion completed in 6.278091883s

• [SLOW TEST:19.758 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
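Same consumption pattern as a plain secret volume, but routed through a projected volume with a single secret source. A sketch with illustrative names:

kubectl create secret generic projected-secret-demo --from-literal=username=admin
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-pod
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/projected/username"]
    volumeMounts:
    - name: projected-volume
      mountPath: /projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
EOF
------------------------------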
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:57:25.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 26 12:57:25.889: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66b24af5-403b-11ea-b664-0242ac110005" in namespace "e2e-tests-projected-8d59b" to be "success or failure"
Jan 26 12:57:25.903: INFO: Pod "downwardapi-volume-66b24af5-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.523245ms
Jan 26 12:57:27.967: INFO: Pod "downwardapi-volume-66b24af5-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07795419s
Jan 26 12:57:29.984: INFO: Pod "downwardapi-volume-66b24af5-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094932682s
Jan 26 12:57:32.632: INFO: Pod "downwardapi-volume-66b24af5-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.742449846s
Jan 26 12:57:34.672: INFO: Pod "downwardapi-volume-66b24af5-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.782486286s
Jan 26 12:57:36.698: INFO: Pod "downwardapi-volume-66b24af5-403b-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.80843491s
Jan 26 12:57:38.926: INFO: Pod "downwardapi-volume-66b24af5-403b-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.036843454s
STEP: Saw pod success
Jan 26 12:57:38.926: INFO: Pod "downwardapi-volume-66b24af5-403b-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 12:57:38.941: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-66b24af5-403b-11ea-b664-0242ac110005 container client-container: 
STEP: delete the pod
Jan 26 12:57:39.099: INFO: Waiting for pod downwardapi-volume-66b24af5-403b-11ea-b664-0242ac110005 to disappear
Jan 26 12:57:39.103: INFO: Pod downwardapi-volume-66b24af5-403b-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 12:57:39.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8d59b" for this suite.
Jan 26 12:57:47.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:57:47.218: INFO: namespace: e2e-tests-projected-8d59b, resource: bindings, ignored listing per whitelist
Jan 26 12:57:47.310: INFO: namespace e2e-tests-projected-8d59b deletion completed in 8.199951153s

• [SLOW TEST:21.740 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
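Here the container's memory limit is surfaced as a file through a resourceFieldRef inside a projected downwardAPI source, and the container reads the file back; the value written is the limit in bytes. A sketch, with the pod name, file path, and 64Mi limit all illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-limits
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_limit"]
    resources:
      requests:
        memory: 32Mi
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
------------------------------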
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 12:57:47.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 26 12:57:50.134: INFO: Pod name wrapped-volume-race-750f2dbf-403b-11ea-b664-0242ac110005: Found 0 pods out of 5
Jan 26 12:57:55.156: INFO: Pod name wrapped-volume-race-750f2dbf-403b-11ea-b664-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-750f2dbf-403b-11ea-b664-0242ac110005 in namespace e2e-tests-emptydir-wrapper-jvbg4, will wait for the garbage collector to delete the pods
Jan 26 13:00:09.318: INFO: Deleting ReplicationController wrapped-volume-race-750f2dbf-403b-11ea-b664-0242ac110005 took: 24.949127ms
Jan 26 13:00:10.519: INFO: Terminating ReplicationController wrapped-volume-race-750f2dbf-403b-11ea-b664-0242ac110005 pods took: 1.200812877s
STEP: Creating RC which spawns configmap-volume pods
Jan 26 13:00:53.900: INFO: Pod name wrapped-volume-race-e299c8f3-403b-11ea-b664-0242ac110005: Found 0 pods out of 5
Jan 26 13:00:58.928: INFO: Pod name wrapped-volume-race-e299c8f3-403b-11ea-b664-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e299c8f3-403b-11ea-b664-0242ac110005 in namespace e2e-tests-emptydir-wrapper-jvbg4, will wait for the garbage collector to delete the pods
Jan 26 13:03:05.159: INFO: Deleting ReplicationController wrapped-volume-race-e299c8f3-403b-11ea-b664-0242ac110005 took: 78.35202ms
Jan 26 13:03:05.360: INFO: Terminating ReplicationController wrapped-volume-race-e299c8f3-403b-11ea-b664-0242ac110005 pods took: 200.967952ms
STEP: Creating RC which spawns configmap-volume pods
Jan 26 13:03:53.180: INFO: Pod name wrapped-volume-race-4d7f1ec7-403c-11ea-b664-0242ac110005: Found 0 pods out of 5
Jan 26 13:03:58.210: INFO: Pod name wrapped-volume-race-4d7f1ec7-403c-11ea-b664-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4d7f1ec7-403c-11ea-b664-0242ac110005 in namespace e2e-tests-emptydir-wrapper-jvbg4, will wait for the garbage collector to delete the pods
Jan 26 13:06:02.398: INFO: Deleting ReplicationController wrapped-volume-race-4d7f1ec7-403c-11ea-b664-0242ac110005 took: 27.800647ms
Jan 26 13:06:02.899: INFO: Terminating ReplicationController wrapped-volume-race-4d7f1ec7-403c-11ea-b664-0242ac110005 pods took: 500.834115ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 13:06:55.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-jvbg4" for this suite.
Jan 26 13:07:03.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:07:03.465: INFO: namespace: e2e-tests-emptydir-wrapper-jvbg4, resource: bindings, ignored listing per whitelist
Jan 26 13:07:03.676: INFO: namespace e2e-tests-emptydir-wrapper-jvbg4 deletion completed in 8.353753673s

• [SLOW TEST:556.365 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
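The race being guarded against is many ConfigMap volumes being set up concurrently for the same pods: the test creates 50 ConfigMaps, spawns an RC whose pods mount them, lets the garbage collector tear it all down, and repeats three times (hence the three RC cycles above). A drastically scaled-down sketch of that shape, with 3 ConfigMaps instead of 50 and all names illustrative:

kubectl create configmap cm-0 --from-literal=k=v
kubectl create configmap cm-1 --from-literal=k=v
kubectl create configmap cm-2 --from-literal=k=v
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-demo
spec:
  replicas: 5
  selector:
    name: wrapped-volume-race-demo
  template:
    metadata:
      labels:
        name: wrapped-volume-race-demo
    spec:
      containers:
      - name: test-container
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - {name: cm-0, mountPath: /etc/cm-0}
        - {name: cm-1, mountPath: /etc/cm-1}
        - {name: cm-2, mountPath: /etc/cm-2}
      volumes:
      - {name: cm-0, configMap: {name: cm-0}}
      - {name: cm-1, configMap: {name: cm-1}}
      - {name: cm-2, configMap: {name: cm-2}}
EOF
------------------------------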
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 13:07:03.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 26 13:07:03.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mh4cr'
Jan 26 13:07:06.040: INFO: stderr: ""
Jan 26 13:07:06.040: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 26 13:07:08.286: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 13:07:08.286: INFO: Found 0 / 1
Jan 26 13:07:09.943: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 13:07:09.944: INFO: Found 0 / 1
Jan 26 13:07:11.535: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 13:07:11.535: INFO: Found 0 / 1
Jan 26 13:07:12.057: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 13:07:12.057: INFO: Found 0 / 1
Jan 26 13:07:13.055: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 13:07:13.055: INFO: Found 0 / 1
Jan 26 13:07:14.104: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 13:07:14.104: INFO: Found 0 / 1
Jan 26 13:07:15.758: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 13:07:15.759: INFO: Found 0 / 1
Jan 26 13:07:16.324: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 13:07:16.324: INFO: Found 0 / 1
Jan 26 13:07:17.048: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 13:07:17.048: INFO: Found 0 / 1
Jan 26 13:07:18.115: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 13:07:18.115: INFO: Found 0 / 1
Jan 26 13:07:19.102: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 13:07:19.102: INFO: Found 1 / 1
Jan 26 13:07:19.102: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 26 13:07:19.188: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 13:07:19.188: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 26 13:07:19.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-d5zp8 --namespace=e2e-tests-kubectl-mh4cr -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 26 13:07:19.367: INFO: stderr: ""
Jan 26 13:07:19.367: INFO: stdout: "pod/redis-master-d5zp8 patched\n"
STEP: checking annotations
Jan 26 13:07:19.377: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 13:07:19.377: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 13:07:19.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mh4cr" for this suite.
Jan 26 13:07:43.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:07:43.465: INFO: namespace: e2e-tests-kubectl-mh4cr, resource: bindings, ignored listing per whitelist
Jan 26 13:07:43.566: INFO: namespace e2e-tests-kubectl-mh4cr deletion completed in 24.184786011s

• [SLOW TEST:39.890 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
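The patch applied above is a strategic-merge patch that adds a single annotation; the read-back the test performs by looping over matching pods can also be done directly with jsonpath. The pod name below is the generated one from this run, executed in the test namespace:

kubectl patch pod redis-master-d5zp8 -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod redis-master-d5zp8 -o jsonpath='{.metadata.annotations.x}'   # expect: y
------------------------------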
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 13:07:43.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 26 13:07:43.947: INFO: Waiting up to 5m0s for pod "pod-d719849d-403c-11ea-b664-0242ac110005" in namespace "e2e-tests-emptydir-blnqq" to be "success or failure"
Jan 26 13:07:43.957: INFO: Pod "pod-d719849d-403c-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.91167ms
Jan 26 13:07:45.990: INFO: Pod "pod-d719849d-403c-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043461757s
Jan 26 13:07:48.006: INFO: Pod "pod-d719849d-403c-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059819067s
Jan 26 13:07:50.697: INFO: Pod "pod-d719849d-403c-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.750639222s
Jan 26 13:07:52.750: INFO: Pod "pod-d719849d-403c-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.803659911s
Jan 26 13:07:54.779: INFO: Pod "pod-d719849d-403c-11ea-b664-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.832767618s
Jan 26 13:07:56.909: INFO: Pod "pod-d719849d-403c-11ea-b664-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.962226611s
STEP: Saw pod success
Jan 26 13:07:56.909: INFO: Pod "pod-d719849d-403c-11ea-b664-0242ac110005" satisfied condition "success or failure"
Jan 26 13:07:56.916: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d719849d-403c-11ea-b664-0242ac110005 container test-container: 
STEP: delete the pod
Jan 26 13:07:57.193: INFO: Waiting for pod pod-d719849d-403c-11ea-b664-0242ac110005 to disappear
Jan 26 13:07:57.219: INFO: Pod pod-d719849d-403c-11ea-b664-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 13:07:57.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-blnqq" for this suite.
Jan 26 13:08:03.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:08:03.356: INFO: namespace: e2e-tests-emptydir-blnqq, resource: bindings, ignored listing per whitelist
Jan 26 13:08:03.452: INFO: namespace e2e-tests-emptydir-blnqq deletion completed in 6.219567159s

• [SLOW TEST:19.886 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
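The pod used in the EmptyDir spec above is generated by the framework and its manifest never appears in the log. A rough hand-written equivalent exercising the same combination (non-root user, tmpfs-backed emptyDir, 0666 file mode) follows; the pod name, image, and permission check are illustrative assumptions, not the framework's actual spec.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs-demo        # hypothetical name, not the test's generated one
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                     # any non-root UID
  containers:
  - name: test-container
    image: busybox                      # the e2e test uses its own mounttest image
    command: ["sh", "-c"]
    args:
    - touch /test-volume/file && chmod 0666 /test-volume/file && stat -c '%a' /test-volume/file
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                    # tmpfs-backed emptyDir
EOF
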
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 13:08:03.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-e2f1fd65-403c-11ea-b664-0242ac110005
STEP: Creating secret with name s-test-opt-upd-e2f1fea0-403c-11ea-b664-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-e2f1fd65-403c-11ea-b664-0242ac110005
STEP: Updating secret s-test-opt-upd-e2f1fea0-403c-11ea-b664-0242ac110005
STEP: Creating secret with name s-test-opt-create-e2f1fecb-403c-11ea-b664-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 13:08:18.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qpqrd" for this suite.
Jan 26 13:08:42.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:08:43.483: INFO: namespace: e2e-tests-secrets-qpqrd, resource: bindings, ignored listing per whitelist
Jan 26 13:08:43.590: INFO: namespace e2e-tests-secrets-qpqrd deletion completed in 24.813224832s

• [SLOW TEST:40.137 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
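The Secrets steps above are driven through the Go client, so no commands appear in the log. A hedged command-line approximation of the same scenario follows; all names and key/value pairs are invented for illustration, and the real test updates the secret object in place rather than deleting and recreating it.

# Two secrets: one that will later be deleted, one that will later be changed.
kubectl create secret generic s-test-opt-del --from-literal=data-1=value-1
kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-1

# A pod that mounts both as optional secret volumes, so a missing secret does not block it.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-optional-demo            # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: del-vol
      mountPath: /etc/del
    - name: upd-vol
      mountPath: /etc/upd
  volumes:
  - name: del-vol
    secret:
      secretName: s-test-opt-del
      optional: true                    # pod keeps running even if the secret disappears
  - name: upd-vol
    secret:
      secretName: s-test-opt-upd
      optional: true
EOF

# Delete one secret and change the other; the kubelet eventually refreshes the mounted files.
kubectl delete secret s-test-opt-del
kubectl delete secret s-test-opt-upd
kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-2

# Observe the updated content once the kubelet sync period has elapsed.
kubectl exec secret-optional-demo -- cat /etc/upd/data-1
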
SSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 26 13:08:43.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 26 13:08:43.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 26 13:08:52.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-s9lmr" for this suite.
Jan 26 13:09:34.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:09:34.115: INFO: namespace: e2e-tests-pods-s9lmr, resource: bindings, ignored listing per whitelist
Jan 26 13:09:34.359: INFO: namespace e2e-tests-pods-s9lmr deletion completed in 42.316189567s

• [SLOW TEST:50.769 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
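The log retrieval in this Pods spec goes through the API server's pod log subresource over a websocket rather than through kubectl. Two related commands for comparison: the pod name is generated by the test and not shown in the log, so a placeholder is used, and the second command issues a plain GET against the same subresource that the test upgrades to a websocket via the Go REST client.

# Ordinary log retrieval:
kubectl --namespace=e2e-tests-pods-s9lmr logs <pod-name>

# The same pod log subresource, fetched directly from the API server
# (the e2e test performs this request with a websocket upgrade instead).
kubectl get --raw "/api/v1/namespaces/e2e-tests-pods-s9lmr/pods/<pod-name>/log"
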
SSSSSSSSSSSSSSSSSSSSSSSSSSS
Jan 26 13:09:34.360: INFO: Running AfterSuite actions on all nodes
Jan 26 13:09:34.360: INFO: Running AfterSuite actions on node 1
Jan 26 13:09:34.360: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-api-machinery] Namespaces [Serial] [It] should ensure that all pods are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161

Ran 199 of 2164 Specs in 8537.283 seconds
FAIL! -- 198 Passed | 1 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (8537.55s)
FAIL