2019-03-17 10:47:48,057 - xtesting.ci.run_tests - INFO - Deployment description:
+--------------------------------------+----------------------------------------------------------+
|                ENV VAR               |                           VALUE                            |
+--------------------------------------+----------------------------------------------------------+
| BUILD_TAG                            |                                                            |
| ENERGY_RECORDER_API_URL              | http://energy.opnfv.fr/resources                           |
| ENERGY_RECORDER_API_PASSWORD         |                                                            |
| CI_LOOP                              | daily                                                      |
| TEST_DB_URL                          | http://testresults.opnfv.org/test/api/v1/results           |
| INSTALLER_TYPE                       | unknown                                                    |
| DEPLOY_SCENARIO                      | k8-nosdn-nofeature-noha                                    |
| ENERGY_RECORDER_API_USER             |                                                            |
| NODE_NAME                            |                                                            |
+--------------------------------------+----------------------------------------------------------+
2019-03-17 10:47:48,059 - xtesting.ci.run_tests - DEBUG - No env file /var/lib/xtesting/conf/env_file found
2019-03-17 10:47:48,059 - xtesting.ci.run_tests - DEBUG - Test args: k8s_conformance
2019-03-17 10:47:48,062 - xtesting.ci.run_tests - INFO - Loading test case 'k8s_conformance'...
2019-03-17 10:47:48,069 - xtesting.ci.run_tests - INFO - Running test case 'k8s_conformance'...
2019-03-17 10:47:48,069 - functest_kubernetes.k8stest - INFO - Starting k8s test: '['e2e.test', '-ginkgo.focus', '\\[Conformance\\]', '-ginkgo.noColor', '-ginkgo.skip', 'Alpha|\\[(Disruptive|Feature:[^\\]]+|Flaky)\\]', '-kubeconfig', '/root/.kube/config', '-provider', 'local', '-report-dir', '/home/opnfv/functest/results/k8s_conformance']'.
2019-03-17 12:44:31,325 - functest_kubernetes.k8stest - ERROR - Error with running kubetest:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/functest_kubernetes/k8stest.py", line 107, in run
    self.run_kubetest()
  File "/usr/lib/python2.7/site-packages/functest_kubernetes/k8stest.py", line 52, in run_kubetest
    raise Exception(output)
Exception: I0317 10:47:48.698517       8 e2e.go:224] Starting e2e run "1ab0292e-48a2-11e9-bf64-0242ac110009" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1552819668 - Will randomize all specs
Will run 201 of 2161 specs

Mar 17 10:47:48.862: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 10:47:48.865: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 17 10:47:48.875: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 17 10:47:48.906: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 17 10:47:48.906: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
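The "Starting k8s test" entry above records the exact command that functest_kubernetes hands to the upstream Kubernetes e2e.test binary. The short Python sketch below is only an illustration of how that same Conformance-focused invocation could be reproduced by hand; it is not the functest_kubernetes implementation, it assumes e2e.test is available on the PATH, and it simply reuses the kubeconfig and report directory shown in the log.

# Reproduction sketch only: assumes the e2e.test binary is on the PATH and reuses
# the kubeconfig and report directory recorded in the log above. This is not the
# functest_kubernetes implementation.
import subprocess

cmd = [
    "e2e.test",
    "-ginkgo.focus", "\\[Conformance\\]",
    "-ginkgo.noColor",
    "-ginkgo.skip", "Alpha|\\[(Disruptive|Feature:[^\\]]+|Flaky)\\]",
    "-kubeconfig", "/root/.kube/config",
    "-provider", "local",
    "-report-dir", "/home/opnfv/functest/results/k8s_conformance",
]

# Stream the suite output as it is produced so the long-running specs stay visible,
# then fail loudly if the suite exits non-zero.
with subprocess.Popen(cmd, stdout=subprocess.PIPE,
                      stderr=subprocess.STDOUT, text=True) as proc:
    for line in proc.stdout:
        print(line, end="")
    if proc.wait() != 0:
        raise RuntimeError("e2e.test reported failures")

The raw suite output continues below.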
Mar 17 10:47:48.906: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Mar 17 10:47:48.919: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Mar 17 10:47:48.919: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed) Mar 17 10:47:48.919: INFO: e2e test version: v1.13.4 Mar 17 10:47:48.920: INFO: kube-apiserver version: v1.13.4 S ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:47:48.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected Mar 17 10:47:49.183: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-1b535cf7-48a2-11e9-bf64-0242ac110009 STEP: Creating a pod to test consume configMaps Mar 17 10:47:49.224: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1b54669e-48a2-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-dqzd2" to be "success or failure" Mar 17 10:47:49.240: INFO: Pod "pod-projected-configmaps-1b54669e-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.912241ms Mar 17 10:47:51.299: INFO: Pod "pod-projected-configmaps-1b54669e-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075858361s Mar 17 10:47:53.303: INFO: Pod "pod-projected-configmaps-1b54669e-48a2-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079892946s STEP: Saw pod success Mar 17 10:47:53.303: INFO: Pod "pod-projected-configmaps-1b54669e-48a2-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 10:47:53.308: INFO: Trying to get logs from node kube pod pod-projected-configmaps-1b54669e-48a2-11e9-bf64-0242ac110009 container projected-configmap-volume-test: STEP: delete the pod Mar 17 10:47:53.344: INFO: Waiting for pod pod-projected-configmaps-1b54669e-48a2-11e9-bf64-0242ac110009 to disappear Mar 17 10:47:53.710: INFO: Pod pod-projected-configmaps-1b54669e-48a2-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:47:53.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dqzd2" for this suite. 
Mar 17 10:47:59.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:47:59.774: INFO: namespace: e2e-tests-projected-dqzd2, resource: bindings, ignored listing per whitelist Mar 17 10:47:59.828: INFO: namespace e2e-tests-projected-dqzd2 deletion completed in 6.114694729s • [SLOW TEST:10.908 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:47:59.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6277k A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-6277k;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6277k A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-6277k;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6277k.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-6277k.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6277k.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-6277k.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6277k.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6277k.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6277k.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6277k.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6277k.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-6277k.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6277k.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-6277k.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6277k.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 9.87.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.87.9_udp@PTR;check="$$(dig +tcp +noall +answer +search 9.87.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.87.9_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6277k A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-6277k;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6277k A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-6277k;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6277k.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-6277k.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6277k.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-6277k.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6277k.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6277k.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6277k.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6277k.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6277k.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-6277k.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6277k.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-6277k.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6277k.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 9.87.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.87.9_udp@PTR;check="$$(dig +tcp +noall +answer +search 9.87.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.87.9_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 17 10:48:08.310: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.385: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.388: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-6277k from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.390: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-6277k from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.394: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-6277k.svc from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.396: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-6277k.svc from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.398: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6277k.svc from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.402: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6277k.svc from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.409: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-6277k.svc from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.412: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-6277k.svc from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.415: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.419: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.441: INFO: Unable to read 10.109.87.9_udp@PTR from pod 
e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.444: INFO: Unable to read 10.109.87.9_tcp@PTR from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.447: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.450: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.452: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6277k from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.455: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6277k from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.463: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6277k.svc from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.467: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6277k.svc from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.469: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6277k.svc from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.471: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6277k.svc from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.473: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-6277k.svc from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.475: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-6277k.svc from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.477: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.480: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the 
server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.482: INFO: Unable to read 10.109.87.9_udp@PTR from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.484: INFO: Unable to read 10.109.87.9_tcp@PTR from pod e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-21e06af8-48a2-11e9-bf64-0242ac110009) Mar 17 10:48:08.484: INFO: Lookups using e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-6277k wheezy_tcp@dns-test-service.e2e-tests-dns-6277k wheezy_udp@dns-test-service.e2e-tests-dns-6277k.svc wheezy_tcp@dns-test-service.e2e-tests-dns-6277k.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6277k.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6277k.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-6277k.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-6277k.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.109.87.9_udp@PTR 10.109.87.9_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-6277k jessie_tcp@dns-test-service.e2e-tests-dns-6277k jessie_udp@dns-test-service.e2e-tests-dns-6277k.svc jessie_tcp@dns-test-service.e2e-tests-dns-6277k.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6277k.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6277k.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-6277k.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-6277k.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.109.87.9_udp@PTR 10.109.87.9_tcp@PTR] Mar 17 10:48:15.691: INFO: DNS probes using e2e-tests-dns-6277k/dns-test-21e06af8-48a2-11e9-bf64-0242ac110009 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:48:15.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-6277k" for this suite. 
Mar 17 10:48:26.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:48:26.667: INFO: namespace: e2e-tests-dns-6277k, resource: bindings, ignored listing per whitelist Mar 17 10:48:26.689: INFO: namespace e2e-tests-dns-6277k deletion completed in 10.673836934s • [SLOW TEST:26.860 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:48:26.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:48:36.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-k4cxx" for this suite. Mar 17 10:48:44.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:48:44.331: INFO: namespace: e2e-tests-namespaces-k4cxx, resource: bindings, ignored listing per whitelist Mar 17 10:48:44.393: INFO: namespace e2e-tests-namespaces-k4cxx deletion completed in 8.219647035s STEP: Destroying namespace "e2e-tests-nsdeletetest-k4jr8" for this suite. Mar 17 10:48:44.396: INFO: Namespace e2e-tests-nsdeletetest-k4jr8 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-gsssv" for this suite. 
Mar 17 10:48:50.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:48:50.608: INFO: namespace: e2e-tests-nsdeletetest-gsssv, resource: bindings, ignored listing per whitelist Mar 17 10:48:50.622: INFO: namespace e2e-tests-nsdeletetest-gsssv deletion completed in 6.226913501s • [SLOW TEST:23.934 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:48:50.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 17 10:49:00.317: INFO: Successfully updated pod "labelsupdate40382863-48a2-11e9-bf64-0242ac110009" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:49:02.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-q8gwt" for this suite. 
Mar 17 10:49:26.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:49:26.769: INFO: namespace: e2e-tests-projected-q8gwt, resource: bindings, ignored listing per whitelist Mar 17 10:49:26.842: INFO: namespace e2e-tests-projected-q8gwt deletion completed in 24.175775908s • [SLOW TEST:36.219 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:49:26.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 17 10:49:27.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-zpg6j' Mar 17 10:49:34.108: INFO: stderr: "" Mar 17 10:49:34.108: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Mar 17 10:49:39.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-zpg6j -o json' Mar 17 10:49:39.247: INFO: stderr: "" Mar 17 10:49:39.247: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-03-17T10:49:34Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-zpg6j\",\n \"resourceVersion\": \"1282118\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-zpg6j/pods/e2e-test-nginx-pod\",\n \"uid\": \"59dd0d93-48a2-11e9-a072-fa163e921bae\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-fndwd\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kube\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n 
\"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-fndwd\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-fndwd\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-03-17T10:49:34Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-03-17T10:49:38Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-03-17T10:49:38Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-03-17T10:49:34Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://87a36f715e8d46896c2ffbd25a3c6653de22c55daf4ea96cd15a9f201fbef494\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:b67e90a1d8088f0e205c77c793c271524773a6de163fb3855b1c1bedf979da7d\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-03-17T10:49:37Z\"\n }\n }\n }\n ],\n \"hostIP\": \"192.168.100.7\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-03-17T10:49:34Z\"\n }\n}\n" STEP: replace the image in the pod Mar 17 10:49:39.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-zpg6j' Mar 17 10:49:39.609: INFO: stderr: "" Mar 17 10:49:39.609: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Mar 17 10:49:39.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-zpg6j' Mar 17 10:49:50.668: INFO: stderr: "" Mar 17 10:49:50.668: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:49:50.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zpg6j" for this suite. 
Mar 17 10:49:56.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:49:56.719: INFO: namespace: e2e-tests-kubectl-zpg6j, resource: bindings, ignored listing per whitelist Mar 17 10:49:56.757: INFO: namespace e2e-tests-kubectl-zpg6j deletion completed in 6.080807151s • [SLOW TEST:29.915 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:49:56.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-676a896f-48a2-11e9-bf64-0242ac110009 STEP: Creating a pod to test consume secrets Mar 17 10:49:57.032: INFO: Waiting up to 5m0s for pod "pod-secrets-6782c4b8-48a2-11e9-bf64-0242ac110009" in namespace "e2e-tests-secrets-cmwkf" to be "success or failure" Mar 17 10:49:57.053: INFO: Pod "pod-secrets-6782c4b8-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 20.175324ms Mar 17 10:49:59.158: INFO: Pod "pod-secrets-6782c4b8-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125483684s Mar 17 10:50:01.266: INFO: Pod "pod-secrets-6782c4b8-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233428428s Mar 17 10:50:03.350: INFO: Pod "pod-secrets-6782c4b8-48a2-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.317399562s STEP: Saw pod success Mar 17 10:50:03.350: INFO: Pod "pod-secrets-6782c4b8-48a2-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 10:50:03.353: INFO: Trying to get logs from node kube pod pod-secrets-6782c4b8-48a2-11e9-bf64-0242ac110009 container secret-volume-test: STEP: delete the pod Mar 17 10:50:03.554: INFO: Waiting for pod pod-secrets-6782c4b8-48a2-11e9-bf64-0242ac110009 to disappear Mar 17 10:50:03.580: INFO: Pod pod-secrets-6782c4b8-48a2-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:50:03.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-cmwkf" for this suite. 
Mar 17 10:50:09.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:50:09.724: INFO: namespace: e2e-tests-secrets-cmwkf, resource: bindings, ignored listing per whitelist Mar 17 10:50:09.838: INFO: namespace e2e-tests-secrets-cmwkf deletion completed in 6.25133967s • [SLOW TEST:13.081 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:50:09.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 17 10:50:10.236: INFO: Waiting up to 5m0s for pod "pod-6f652335-48a2-11e9-bf64-0242ac110009" in namespace "e2e-tests-emptydir-cjzfx" to be "success or failure" Mar 17 10:50:10.269: INFO: Pod "pod-6f652335-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 33.091885ms Mar 17 10:50:12.272: INFO: Pod "pod-6f652335-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036305952s Mar 17 10:50:14.275: INFO: Pod "pod-6f652335-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03922128s Mar 17 10:50:16.278: INFO: Pod "pod-6f652335-48a2-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042190397s STEP: Saw pod success Mar 17 10:50:16.278: INFO: Pod "pod-6f652335-48a2-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 10:50:16.281: INFO: Trying to get logs from node kube pod pod-6f652335-48a2-11e9-bf64-0242ac110009 container test-container: STEP: delete the pod Mar 17 10:50:16.353: INFO: Waiting for pod pod-6f652335-48a2-11e9-bf64-0242ac110009 to disappear Mar 17 10:50:16.373: INFO: Pod pod-6f652335-48a2-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:50:16.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-cjzfx" for this suite. 
Mar 17 10:50:22.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:50:22.505: INFO: namespace: e2e-tests-emptydir-cjzfx, resource: bindings, ignored listing per whitelist Mar 17 10:50:22.538: INFO: namespace e2e-tests-emptydir-cjzfx deletion completed in 6.160920689s • [SLOW TEST:12.700 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:50:22.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-6bc2 STEP: Creating a pod to test atomic-volume-subpath Mar 17 10:50:22.833: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-6bc2" in namespace "e2e-tests-subpath-c8dxz" to be "success or failure" Mar 17 10:50:22.838: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265883ms Mar 17 10:50:24.841: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007265373s Mar 17 10:50:26.851: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017557939s Mar 17 10:50:28.992: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158807426s Mar 17 10:50:30.995: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.161734957s Mar 17 10:50:32.998: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Running", Reason="", readiness=false. Elapsed: 10.164631668s Mar 17 10:50:35.016: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Running", Reason="", readiness=false. Elapsed: 12.182665306s Mar 17 10:50:37.033: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Running", Reason="", readiness=false. Elapsed: 14.199396936s Mar 17 10:50:39.036: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Running", Reason="", readiness=false. Elapsed: 16.202033685s Mar 17 10:50:41.039: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Running", Reason="", readiness=false. Elapsed: 18.205393209s Mar 17 10:50:43.042: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Running", Reason="", readiness=false. Elapsed: 20.208588952s Mar 17 10:50:45.539: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.705433738s Mar 17 10:50:47.543: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Running", Reason="", readiness=false. Elapsed: 24.70907555s Mar 17 10:50:49.551: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Running", Reason="", readiness=false. Elapsed: 26.71740646s Mar 17 10:50:51.556: INFO: Pod "pod-subpath-test-downwardapi-6bc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.722717749s STEP: Saw pod success Mar 17 10:50:51.556: INFO: Pod "pod-subpath-test-downwardapi-6bc2" satisfied condition "success or failure" Mar 17 10:50:51.562: INFO: Trying to get logs from node kube pod pod-subpath-test-downwardapi-6bc2 container test-container-subpath-downwardapi-6bc2: STEP: delete the pod Mar 17 10:50:51.606: INFO: Waiting for pod pod-subpath-test-downwardapi-6bc2 to disappear Mar 17 10:50:51.623: INFO: Pod pod-subpath-test-downwardapi-6bc2 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-6bc2 Mar 17 10:50:51.623: INFO: Deleting pod "pod-subpath-test-downwardapi-6bc2" in namespace "e2e-tests-subpath-c8dxz" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:50:51.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-c8dxz" for this suite. Mar 17 10:50:57.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:50:57.840: INFO: namespace: e2e-tests-subpath-c8dxz, resource: bindings, ignored listing per whitelist Mar 17 10:50:57.875: INFO: namespace e2e-tests-subpath-c8dxz deletion completed in 6.236142444s • [SLOW TEST:35.337 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:50:57.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 10:50:58.098: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8bebe289-48a2-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-gfmhg" to be "success or failure" Mar 17 10:50:58.115: INFO: Pod "downwardapi-volume-8bebe289-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.99384ms Mar 17 10:51:00.119: INFO: Pod "downwardapi-volume-8bebe289-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020955455s Mar 17 10:51:02.125: INFO: Pod "downwardapi-volume-8bebe289-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026941764s Mar 17 10:51:04.128: INFO: Pod "downwardapi-volume-8bebe289-48a2-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02992763s STEP: Saw pod success Mar 17 10:51:04.128: INFO: Pod "downwardapi-volume-8bebe289-48a2-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 10:51:04.136: INFO: Trying to get logs from node kube pod downwardapi-volume-8bebe289-48a2-11e9-bf64-0242ac110009 container client-container: STEP: delete the pod Mar 17 10:51:04.360: INFO: Waiting for pod downwardapi-volume-8bebe289-48a2-11e9-bf64-0242ac110009 to disappear Mar 17 10:51:04.368: INFO: Pod downwardapi-volume-8bebe289-48a2-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:51:04.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gfmhg" for this suite. Mar 17 10:51:12.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:51:12.523: INFO: namespace: e2e-tests-projected-gfmhg, resource: bindings, ignored listing per whitelist Mar 17 10:51:12.632: INFO: namespace e2e-tests-projected-gfmhg deletion completed in 8.245759055s • [SLOW TEST:14.757 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:51:12.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-dhp5k [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet Mar 17 10:51:13.067: INFO: Found 0 stateful pods, waiting for 3 Mar 17 10:51:23.071: INFO: Found 2 stateful pods, waiting for 3 Mar 17 10:51:34.604: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - 
Ready=true Mar 17 10:51:34.604: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 17 10:51:34.604: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 17 10:51:34.861: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 17 10:51:49.043: INFO: Updating stateful set ss2 Mar 17 10:51:49.055: INFO: Waiting for Pod e2e-tests-statefulset-dhp5k/ss2-2 to have revision ss2-c79899b9 update revision ss2-787997d666 STEP: Restoring Pods to the correct revision when they are deleted Mar 17 10:51:59.852: INFO: Found 1 stateful pods, waiting for 3 Mar 17 10:52:09.876: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 17 10:52:09.876: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 17 10:52:09.876: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 17 10:52:19.859: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 17 10:52:19.859: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 17 10:52:19.859: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 17 10:52:19.883: INFO: Updating stateful set ss2 Mar 17 10:52:20.129: INFO: Waiting for Pod e2e-tests-statefulset-dhp5k/ss2-1 to have revision ss2-c79899b9 update revision ss2-787997d666 Mar 17 10:52:30.154: INFO: Updating stateful set ss2 Mar 17 10:52:30.470: INFO: Waiting for StatefulSet e2e-tests-statefulset-dhp5k/ss2 to complete update Mar 17 10:52:30.470: INFO: Waiting for Pod e2e-tests-statefulset-dhp5k/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666 Mar 17 10:52:40.475: INFO: Waiting for StatefulSet e2e-tests-statefulset-dhp5k/ss2 to complete update Mar 17 10:52:40.475: INFO: Waiting for Pod e2e-tests-statefulset-dhp5k/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666 Mar 17 10:52:50.476: INFO: Waiting for StatefulSet e2e-tests-statefulset-dhp5k/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 17 10:53:00.476: INFO: Deleting all statefulset in ns e2e-tests-statefulset-dhp5k Mar 17 10:53:00.478: INFO: Scaling statefulset ss2 to 0 Mar 17 10:53:22.174: INFO: Waiting for statefulset status.replicas updated to 0 Mar 17 10:53:22.195: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:53:22.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-dhp5k" for this suite. 
Mar 17 10:53:30.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:53:30.629: INFO: namespace: e2e-tests-statefulset-dhp5k, resource: bindings, ignored listing per whitelist Mar 17 10:53:30.775: INFO: namespace e2e-tests-statefulset-dhp5k deletion completed in 8.218245041s • [SLOW TEST:138.143 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:53:30.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 17 10:53:30.992: INFO: Waiting up to 5m0s for pod "pod-e70d85a8-48a2-11e9-bf64-0242ac110009" in namespace "e2e-tests-emptydir-rrmvd" to be "success or failure" Mar 17 10:53:30.998: INFO: Pod "pod-e70d85a8-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.490764ms Mar 17 10:53:33.015: INFO: Pod "pod-e70d85a8-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02283474s Mar 17 10:53:35.020: INFO: Pod "pod-e70d85a8-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028575908s Mar 17 10:53:37.024: INFO: Pod "pod-e70d85a8-48a2-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032448007s STEP: Saw pod success Mar 17 10:53:37.024: INFO: Pod "pod-e70d85a8-48a2-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 10:53:37.029: INFO: Trying to get logs from node kube pod pod-e70d85a8-48a2-11e9-bf64-0242ac110009 container test-container: STEP: delete the pod Mar 17 10:53:37.077: INFO: Waiting for pod pod-e70d85a8-48a2-11e9-bf64-0242ac110009 to disappear Mar 17 10:53:37.137: INFO: Pod pod-e70d85a8-48a2-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:53:37.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-rrmvd" for this suite. 
Mar 17 10:53:45.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:53:45.118: INFO: namespace: e2e-tests-emptydir-rrmvd, resource: bindings, ignored listing per whitelist Mar 17 10:53:45.304: INFO: namespace e2e-tests-emptydir-rrmvd deletion completed in 8.160800424s • [SLOW TEST:14.529 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:53:45.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 10:53:45.484: INFO: Waiting up to 5m0s for pod "downwardapi-volume-efad734c-48a2-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-vj468" to be "success or failure" Mar 17 10:53:45.499: INFO: Pod "downwardapi-volume-efad734c-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 14.533154ms Mar 17 10:53:47.518: INFO: Pod "downwardapi-volume-efad734c-48a2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03426749s Mar 17 10:53:49.797: INFO: Pod "downwardapi-volume-efad734c-48a2-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.312920873s STEP: Saw pod success Mar 17 10:53:49.797: INFO: Pod "downwardapi-volume-efad734c-48a2-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 10:53:49.832: INFO: Trying to get logs from node kube pod downwardapi-volume-efad734c-48a2-11e9-bf64-0242ac110009 container client-container: STEP: delete the pod Mar 17 10:53:50.019: INFO: Waiting for pod downwardapi-volume-efad734c-48a2-11e9-bf64-0242ac110009 to disappear Mar 17 10:53:50.075: INFO: Pod downwardapi-volume-efad734c-48a2-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:53:50.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vj468" for this suite. 
Mar 17 10:53:56.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:53:56.244: INFO: namespace: e2e-tests-projected-vj468, resource: bindings, ignored listing per whitelist Mar 17 10:53:56.313: INFO: namespace e2e-tests-projected-vj468 deletion completed in 6.231669545s • [SLOW TEST:11.009 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:53:56.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 17 10:56:47.043: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:56:47.378: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:56:49.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:56:49.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:56:51.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:56:51.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:56:53.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:56:53.383: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:56:55.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:56:55.799: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:56:57.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:56:57.381: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:56:59.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:56:59.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:01.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:01.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:03.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:03.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:05.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:05.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:07.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 
10:57:07.425: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:09.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:09.386: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:11.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:11.385: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:13.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:13.388: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:15.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:15.423: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:17.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:17.464: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:19.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:19.723: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:21.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:21.384: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:23.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:23.384: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:25.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:25.386: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:27.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:27.488: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:29.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:29.383: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:31.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:31.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:33.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:33.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:35.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:35.386: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:37.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:37.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:39.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:39.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:41.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:41.381: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:43.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:43.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:45.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:45.381: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:47.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:47.386: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:49.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:49.381: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:51.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:51.496: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 
10:57:53.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:53.383: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:55.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:55.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:57.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:57.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:57:59.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:57:59.384: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:58:01.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:58:01.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:58:03.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:58:03.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:58:05.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:58:05.382: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:58:07.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:58:08.829: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:58:09.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:58:09.614: INFO: Pod pod-with-poststart-exec-hook still exists Mar 17 10:58:11.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 17 10:58:11.386: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 10:58:11.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-szsw7" for this suite. 
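The Container Lifecycle Hook case above creates a pod whose container declares a postStart exec hook; the long run of "still exists" entries is simply the test polling every two seconds while the hook pod terminates. As a rough illustration only (the suite's actual manifest has the hook call back to a separate HTTP handler pod to prove it executed), a postStart exec hook looks like this:

kubectl apply -f - <<'EOF'
# Illustrative sketch of a postStart exec hook; pod name, image and hook
# command are hypothetical, not the e2e suite's manifest.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook-demo
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo poststart ran > /tmp/poststart-marker"]
EOF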
Mar 17 10:58:33.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 10:58:33.613: INFO: namespace: e2e-tests-container-lifecycle-hook-szsw7, resource: bindings, ignored listing per whitelist Mar 17 10:58:33.662: INFO: namespace e2e-tests-container-lifecycle-hook-szsw7 deletion completed in 22.273092481s • [SLOW TEST:277.349 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 10:58:33.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 17 10:58:35.262: INFO: Pod name wrapped-volume-race-9c56abb9-48a3-11e9-bf64-0242ac110009: Found 0 pods out of 5 Mar 17 10:58:40.678: INFO: Pod name wrapped-volume-race-9c56abb9-48a3-11e9-bf64-0242ac110009: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9c56abb9-48a3-11e9-bf64-0242ac110009 in namespace e2e-tests-emptydir-wrapper-5l4sg, will wait for the garbage collector to delete the pods Mar 17 10:59:02.795: INFO: Deleting ReplicationController wrapped-volume-race-9c56abb9-48a3-11e9-bf64-0242ac110009 took: 23.793922ms Mar 17 10:59:03.296: INFO: Terminating ReplicationController wrapped-volume-race-9c56abb9-48a3-11e9-bf64-0242ac110009 pods took: 500.217065ms STEP: Creating RC which spawns configmap-volume pods Mar 17 10:59:42.925: INFO: Pod name wrapped-volume-race-c4aba928-48a3-11e9-bf64-0242ac110009: Found 0 pods out of 5 Mar 17 10:59:47.996: INFO: Pod name wrapped-volume-race-c4aba928-48a3-11e9-bf64-0242ac110009: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c4aba928-48a3-11e9-bf64-0242ac110009 in namespace e2e-tests-emptydir-wrapper-5l4sg, will wait for the garbage collector to delete the pods Mar 17 11:00:14.103: INFO: Deleting ReplicationController wrapped-volume-race-c4aba928-48a3-11e9-bf64-0242ac110009 took: 7.791262ms Mar 17 11:00:14.404: INFO: Terminating ReplicationController wrapped-volume-race-c4aba928-48a3-11e9-bf64-0242ac110009 pods took: 300.155068ms STEP: Creating RC which spawns configmap-volume pods Mar 17 11:00:58.554: INFO: Pod name wrapped-volume-race-f1cbc516-48a3-11e9-bf64-0242ac110009: Found 0 pods out of 5 Mar 17 11:01:03.569: INFO: Pod name 
wrapped-volume-race-f1cbc516-48a3-11e9-bf64-0242ac110009: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f1cbc516-48a3-11e9-bf64-0242ac110009 in namespace e2e-tests-emptydir-wrapper-5l4sg, will wait for the garbage collector to delete the pods Mar 17 11:01:24.993: INFO: Deleting ReplicationController wrapped-volume-race-f1cbc516-48a3-11e9-bf64-0242ac110009 took: 106.821419ms Mar 17 11:01:25.293: INFO: Terminating ReplicationController wrapped-volume-race-f1cbc516-48a3-11e9-bf64-0242ac110009 pods took: 300.241337ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:02:05.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-5l4sg" for this suite. Mar 17 11:02:19.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:02:19.608: INFO: namespace: e2e-tests-emptydir-wrapper-5l4sg, resource: bindings, ignored listing per whitelist Mar 17 11:02:19.645: INFO: namespace e2e-tests-emptydir-wrapper-5l4sg deletion completed in 14.139612549s • [SLOW TEST:225.982 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:02:19.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 17 11:02:20.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-2mxz9' Mar 17 11:02:22.707: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 17 11:02:22.707: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Mar 17 11:02:24.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-2mxz9' Mar 17 11:02:27.327: INFO: stderr: "" Mar 17 11:02:27.327: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:02:27.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2mxz9" for this suite. Mar 17 11:02:37.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:02:37.087: INFO: namespace: e2e-tests-kubectl-2mxz9, resource: bindings, ignored listing per whitelist Mar 17 11:02:37.116: INFO: namespace e2e-tests-kubectl-2mxz9 deletion completed in 8.440268039s • [SLOW TEST:17.471 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:02:37.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 17 11:02:37.410: INFO: Waiting up to 5m0s for pod "pod-2cb2ff5f-48a4-11e9-bf64-0242ac110009" in namespace "e2e-tests-emptydir-92t8w" to be "success or failure" Mar 17 11:02:37.450: INFO: Pod "pod-2cb2ff5f-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 40.245208ms Mar 17 11:02:39.454: INFO: Pod "pod-2cb2ff5f-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044420348s Mar 17 11:02:41.457: INFO: Pod "pod-2cb2ff5f-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047098781s Mar 17 11:02:43.461: INFO: Pod "pod-2cb2ff5f-48a4-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.051073386s STEP: Saw pod success Mar 17 11:02:43.461: INFO: Pod "pod-2cb2ff5f-48a4-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:02:43.463: INFO: Trying to get logs from node kube pod pod-2cb2ff5f-48a4-11e9-bf64-0242ac110009 container test-container: STEP: delete the pod Mar 17 11:02:43.659: INFO: Waiting for pod pod-2cb2ff5f-48a4-11e9-bf64-0242ac110009 to disappear Mar 17 11:02:43.673: INFO: Pod pod-2cb2ff5f-48a4-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:02:43.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-92t8w" for this suite. Mar 17 11:02:49.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:02:49.789: INFO: namespace: e2e-tests-emptydir-92t8w, resource: bindings, ignored listing per whitelist Mar 17 11:02:49.799: INFO: namespace e2e-tests-emptydir-92t8w deletion completed in 6.12194727s • [SLOW TEST:12.683 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:02:49.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-4jzx STEP: Creating a pod to test atomic-volume-subpath Mar 17 11:02:50.109: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4jzx" in namespace "e2e-tests-subpath-6r4fj" to be "success or failure" Mar 17 11:02:50.306: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Pending", Reason="", readiness=false. Elapsed: 196.210739ms Mar 17 11:02:52.435: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32532473s Mar 17 11:02:54.439: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329833582s Mar 17 11:02:56.518: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.40835207s Mar 17 11:02:58.521: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.411165485s Mar 17 11:03:00.524: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Running", Reason="", readiness=false. Elapsed: 10.41490489s Mar 17 11:03:02.528: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.418119928s Mar 17 11:03:04.532: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Running", Reason="", readiness=false. Elapsed: 14.422642928s Mar 17 11:03:06.542: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Running", Reason="", readiness=false. Elapsed: 16.432132137s Mar 17 11:03:08.546: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Running", Reason="", readiness=false. Elapsed: 18.43681063s Mar 17 11:03:10.551: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Running", Reason="", readiness=false. Elapsed: 20.441841951s Mar 17 11:03:12.555: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Running", Reason="", readiness=false. Elapsed: 22.445602567s Mar 17 11:03:14.558: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Running", Reason="", readiness=false. Elapsed: 24.449022159s Mar 17 11:03:16.562: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Running", Reason="", readiness=false. Elapsed: 26.453082275s Mar 17 11:03:19.191: INFO: Pod "pod-subpath-test-configmap-4jzx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.081367015s STEP: Saw pod success Mar 17 11:03:19.191: INFO: Pod "pod-subpath-test-configmap-4jzx" satisfied condition "success or failure" Mar 17 11:03:19.538: INFO: Trying to get logs from node kube pod pod-subpath-test-configmap-4jzx container test-container-subpath-configmap-4jzx: STEP: delete the pod Mar 17 11:03:19.836: INFO: Waiting for pod pod-subpath-test-configmap-4jzx to disappear Mar 17 11:03:19.864: INFO: Pod pod-subpath-test-configmap-4jzx no longer exists STEP: Deleting pod pod-subpath-test-configmap-4jzx Mar 17 11:03:19.865: INFO: Deleting pod "pod-subpath-test-configmap-4jzx" in namespace "e2e-tests-subpath-6r4fj" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:03:19.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-6r4fj" for this suite. 
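The Subpath test that just completed mounts a single ConfigMap key into the container via subPath and verifies the container can read the expected content while the pod runs. A hand-rolled approximation is shown below; the ConfigMap name, key, image and paths are assumptions, and the real test uses its own test container that re-reads the file rather than a one-shot cat.

# Illustrative sketch only; object names, key and paths are hypothetical.
kubectl create configmap subpath-demo-config --from-literal=data-1=hello

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /mnt/configmap-file"]
    volumeMounts:
    - name: config
      mountPath: /mnt/configmap-file
      subPath: data-1             # mount just this key as a single file
  volumes:
  - name: config
    configMap:
      name: subpath-demo-config
EOF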
Mar 17 11:03:28.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:03:28.095: INFO: namespace: e2e-tests-subpath-6r4fj, resource: bindings, ignored listing per whitelist Mar 17 11:03:28.205: INFO: namespace e2e-tests-subpath-6r4fj deletion completed in 8.331579112s • [SLOW TEST:38.406 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:03:28.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 11:03:28.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Mar 17 11:03:28.519: INFO: stderr: "" Mar 17 11:03:28.519: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.4\", GitCommit:\"c27b913fddd1a6c480c229191a087698aa92f0b1\", GitTreeState:\"clean\", BuildDate:\"2019-03-10T12:38:54Z\", GoVersion:\"go1.11.1\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Mar 17 11:03:28.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w9hvc' Mar 17 11:03:28.887: INFO: stderr: "" Mar 17 11:03:28.887: INFO: stdout: "replicationcontroller/redis-master created\n" Mar 17 11:03:28.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w9hvc' Mar 17 11:03:30.928: INFO: stderr: "" Mar 17 11:03:30.928: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Mar 17 11:03:31.934: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:03:31.934: INFO: Found 0 / 1 Mar 17 11:03:32.985: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:03:32.985: INFO: Found 0 / 1 Mar 17 11:03:33.932: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:03:33.932: INFO: Found 0 / 1 Mar 17 11:03:34.931: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:03:34.931: INFO: Found 0 / 1 Mar 17 11:03:35.932: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:03:35.932: INFO: Found 0 / 1 Mar 17 11:03:36.934: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:03:36.934: INFO: Found 1 / 1 Mar 17 11:03:36.934: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Mar 17 11:03:36.937: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:03:36.937: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 17 11:03:36.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-rqxzs --namespace=e2e-tests-kubectl-w9hvc' Mar 17 11:03:37.031: INFO: stderr: "" Mar 17 11:03:37.031: INFO: stdout: "Name: redis-master-rqxzs\nNamespace: e2e-tests-kubectl-w9hvc\nPriority: 0\nPriorityClassName: \nNode: kube/192.168.100.7\nStart Time: Sun, 17 Mar 2019 11:03:29 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.32.0.4\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://5f0671145df30362bfa5ecd36e5f03e273a8e0fcc65d3d11226af3b0003b8908\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 17 Mar 2019 11:03:35 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-dgk65 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-dgk65:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-dgk65\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 9s default-scheduler Successfully assigned e2e-tests-kubectl-w9hvc/redis-master-rqxzs to kube\n Normal Pulled 4s kubelet, kube Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 3s kubelet, kube Created container\n Normal Started 2s kubelet, kube Started container\n" Mar 17 11:03:37.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-w9hvc' Mar 17 11:03:37.128: INFO: stderr: "" Mar 17 11:03:37.128: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-w9hvc\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 9s replication-controller Created pod: redis-master-rqxzs\n" Mar 17 11:03:37.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-w9hvc' Mar 17 11:03:37.209: INFO: stderr: "" Mar 17 11:03:37.209: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-w9hvc\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.107.136.219\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.32.0.4:6379\nSession Affinity: None\nEvents: \n" Mar 17 11:03:37.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node kube' Mar 17 11:03:37.314: INFO: 
stderr: "" Mar 17 11:03:37.314: INFO: stdout: "Name: kube\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=kube\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 09 Mar 2019 11:38:10 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 09 Mar 2019 11:38:35 +0000 Sat, 09 Mar 2019 11:38:35 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Sun, 17 Mar 2019 11:03:31 +0000 Sat, 09 Mar 2019 11:38:04 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 17 Mar 2019 11:03:31 +0000 Sat, 09 Mar 2019 11:38:04 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 17 Mar 2019 11:03:31 +0000 Sat, 09 Mar 2019 11:38:04 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 17 Mar 2019 11:03:31 +0000 Sat, 09 Mar 2019 11:38:41 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 192.168.100.7\n Hostname: kube\nCapacity:\n cpu: 4\n ephemeral-storage: 20263528Ki\n hugepages-2Mi: 0\n memory: 4045928Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18674867374\n hugepages-2Mi: 0\n memory: 3943528Ki\n pods: 110\nSystem Info:\n Machine ID: 9d25d7ed8378435ca43765c7c2778443\n System UUID: 9D25D7ED-8378-435C-A437-65C7C2778443\n Boot ID: 9c464167-55c7-41b6-850f-9cb6e463b07d\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 16.04.5 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.6.1\n Kubelet Version: v1.13.4\n Kube-Proxy Version: v1.13.4\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n e2e-tests-kubectl-w9hvc redis-master-rqxzs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9s\n kube-system coredns-86c58d9df4-lrf5x 100m (2%) 0 (0%) 70Mi (1%) 170Mi (4%) 7d23h\n kube-system coredns-86c58d9df4-xv8sl 100m (2%) 0 (0%) 70Mi (1%) 170Mi (4%) 7d23h\n kube-system etcd-kube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d23h\n kube-system kube-apiserver-kube 250m (6%) 0 (0%) 0 (0%) 0 (0%) 7d23h\n kube-system kube-controller-manager-kube 200m (5%) 0 (0%) 0 (0%) 0 (0%) 7d23h\n kube-system kube-proxy-6jlw8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d23h\n kube-system kube-scheduler-kube 100m (2%) 0 (0%) 0 (0%) 0 (0%) 7d23h\n kube-system weave-net-47d2b 20m (0%) 0 (0%) 0 (0%) 0 (0%) 7d23h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 770m (19%) 0 (0%)\n memory 140Mi (3%) 340Mi (8%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 17 11:03:37.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-w9hvc' Mar 17 11:03:37.466: INFO: stderr: "" Mar 17 11:03:37.466: INFO: stdout: "Name: e2e-tests-kubectl-w9hvc\nLabels: e2e-framework=kubectl\n e2e-run=1ab0292e-48a2-11e9-bf64-0242ac110009\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:03:37.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-w9hvc" for this suite. Mar 17 11:04:01.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:04:01.588: INFO: namespace: e2e-tests-kubectl-w9hvc, resource: bindings, ignored listing per whitelist Mar 17 11:04:01.633: INFO: namespace e2e-tests-kubectl-w9hvc deletion completed in 24.162418209s • [SLOW TEST:33.427 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:04:01.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-5f16f17c-48a4-11e9-bf64-0242ac110009 STEP: Creating a pod to test consume secrets Mar 17 11:04:01.913: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5f17ddfd-48a4-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-55fkh" to be "success or failure" Mar 17 11:04:01.934: INFO: Pod "pod-projected-secrets-5f17ddfd-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 21.251356ms Mar 17 11:04:03.947: INFO: Pod "pod-projected-secrets-5f17ddfd-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034032833s Mar 17 11:04:05.949: INFO: Pod "pod-projected-secrets-5f17ddfd-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036850682s Mar 17 11:04:07.954: INFO: Pod "pod-projected-secrets-5f17ddfd-48a4-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041118771s STEP: Saw pod success Mar 17 11:04:07.954: INFO: Pod "pod-projected-secrets-5f17ddfd-48a4-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:04:07.956: INFO: Trying to get logs from node kube pod pod-projected-secrets-5f17ddfd-48a4-11e9-bf64-0242ac110009 container secret-volume-test: STEP: delete the pod Mar 17 11:04:08.154: INFO: Waiting for pod pod-projected-secrets-5f17ddfd-48a4-11e9-bf64-0242ac110009 to disappear Mar 17 11:04:08.158: INFO: Pod pod-projected-secrets-5f17ddfd-48a4-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:04:08.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-55fkh" for this suite. Mar 17 11:04:14.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:04:14.333: INFO: namespace: e2e-tests-projected-55fkh, resource: bindings, ignored listing per whitelist Mar 17 11:04:14.400: INFO: namespace e2e-tests-projected-55fkh deletion completed in 6.239023516s • [SLOW TEST:12.767 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:04:14.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Mar 17 11:04:14.713: INFO: Waiting up to 5m0s for pod "pod-66bdb818-48a4-11e9-bf64-0242ac110009" in namespace "e2e-tests-emptydir-cb622" to be "success or failure" Mar 17 11:04:14.718: INFO: Pod "pod-66bdb818-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 5.488242ms Mar 17 11:04:16.722: INFO: Pod "pod-66bdb818-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009002166s Mar 17 11:04:18.799: INFO: Pod "pod-66bdb818-48a4-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.08650823s STEP: Saw pod success Mar 17 11:04:18.799: INFO: Pod "pod-66bdb818-48a4-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:04:18.801: INFO: Trying to get logs from node kube pod pod-66bdb818-48a4-11e9-bf64-0242ac110009 container test-container: STEP: delete the pod Mar 17 11:04:19.172: INFO: Waiting for pod pod-66bdb818-48a4-11e9-bf64-0242ac110009 to disappear Mar 17 11:04:19.214: INFO: Pod pod-66bdb818-48a4-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:04:19.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-cb622" for this suite. Mar 17 11:04:25.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:04:25.410: INFO: namespace: e2e-tests-emptydir-cb622, resource: bindings, ignored listing per whitelist Mar 17 11:04:25.466: INFO: namespace e2e-tests-emptydir-cb622 deletion completed in 6.235740787s • [SLOW TEST:11.066 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:04:25.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Mar 17 11:04:26.167: INFO: created pod pod-service-account-defaultsa Mar 17 11:04:26.167: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 17 11:04:26.180: INFO: created pod pod-service-account-mountsa Mar 17 11:04:26.180: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 17 11:04:26.226: INFO: created pod pod-service-account-nomountsa Mar 17 11:04:26.226: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 17 11:04:26.514: INFO: created pod pod-service-account-defaultsa-mountspec Mar 17 11:04:26.514: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 17 11:04:26.529: INFO: created pod pod-service-account-mountsa-mountspec Mar 17 11:04:26.529: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 17 11:04:26.549: INFO: created pod pod-service-account-nomountsa-mountspec Mar 17 11:04:26.549: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 17 11:04:26.842: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 17 11:04:26.842: INFO: pod pod-service-account-defaultsa-nomountspec service account token 
volume mount: false Mar 17 11:04:26.850: INFO: created pod pod-service-account-mountsa-nomountspec Mar 17 11:04:26.850: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 17 11:04:26.918: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 17 11:04:26.918: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:04:26.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-pnzp5" for this suite. Mar 17 11:05:09.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:05:09.367: INFO: namespace: e2e-tests-svcaccounts-pnzp5, resource: bindings, ignored listing per whitelist Mar 17 11:05:09.422: INFO: namespace e2e-tests-svcaccounts-pnzp5 deletion completed in 42.325810388s • [SLOW TEST:43.956 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:05:09.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Mar 17 11:05:09.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:10.153: INFO: stderr: "" Mar 17 11:05:10.153: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 17 11:05:10.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:10.489: INFO: stderr: "" Mar 17 11:05:10.489: INFO: stdout: "update-demo-nautilus-ccsw6 update-demo-nautilus-fkx78 " Mar 17 11:05:10.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccsw6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:10.829: INFO: stderr: "" Mar 17 11:05:10.829: INFO: stdout: "" Mar 17 11:05:10.829: INFO: update-demo-nautilus-ccsw6 is created but not running Mar 17 11:05:15.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:17.174: INFO: stderr: "" Mar 17 11:05:17.174: INFO: stdout: "update-demo-nautilus-ccsw6 update-demo-nautilus-fkx78 " Mar 17 11:05:17.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccsw6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:17.425: INFO: stderr: "" Mar 17 11:05:17.425: INFO: stdout: "" Mar 17 11:05:17.425: INFO: update-demo-nautilus-ccsw6 is created but not running Mar 17 11:05:22.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:22.812: INFO: stderr: "" Mar 17 11:05:22.812: INFO: stdout: "update-demo-nautilus-ccsw6 update-demo-nautilus-fkx78 " Mar 17 11:05:22.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccsw6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:23.063: INFO: stderr: "" Mar 17 11:05:23.063: INFO: stdout: "true" Mar 17 11:05:23.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccsw6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:23.202: INFO: stderr: "" Mar 17 11:05:23.202: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 17 11:05:23.202: INFO: validating pod update-demo-nautilus-ccsw6 Mar 17 11:05:23.242: INFO: got data: { "image": "nautilus.jpg" } Mar 17 11:05:23.243: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 17 11:05:23.243: INFO: update-demo-nautilus-ccsw6 is verified up and running Mar 17 11:05:23.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fkx78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:23.379: INFO: stderr: "" Mar 17 11:05:23.379: INFO: stdout: "true" Mar 17 11:05:23.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fkx78 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:23.481: INFO: stderr: "" Mar 17 11:05:23.481: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 17 11:05:23.481: INFO: validating pod update-demo-nautilus-fkx78 Mar 17 11:05:23.486: INFO: got data: { "image": "nautilus.jpg" } Mar 17 11:05:23.486: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 17 11:05:23.486: INFO: update-demo-nautilus-fkx78 is verified up and running STEP: scaling down the replication controller Mar 17 11:05:23.487: INFO: scanned /root for discovery docs: Mar 17 11:05:23.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:24.655: INFO: stderr: "" Mar 17 11:05:24.655: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 17 11:05:24.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:24.731: INFO: stderr: "" Mar 17 11:05:24.732: INFO: stdout: "update-demo-nautilus-ccsw6 update-demo-nautilus-fkx78 " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 17 11:05:29.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:29.803: INFO: stderr: "" Mar 17 11:05:29.803: INFO: stdout: "update-demo-nautilus-fkx78 " Mar 17 11:05:29.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fkx78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:29.889: INFO: stderr: "" Mar 17 11:05:29.889: INFO: stdout: "true" Mar 17 11:05:29.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fkx78 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:29.964: INFO: stderr: "" Mar 17 11:05:29.964: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 17 11:05:29.964: INFO: validating pod update-demo-nautilus-fkx78 Mar 17 11:05:29.967: INFO: got data: { "image": "nautilus.jpg" } Mar 17 11:05:29.967: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 17 11:05:29.967: INFO: update-demo-nautilus-fkx78 is verified up and running STEP: scaling up the replication controller Mar 17 11:05:29.969: INFO: scanned /root for discovery docs: Mar 17 11:05:29.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:31.124: INFO: stderr: "" Mar 17 11:05:31.124: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 17 11:05:31.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:31.201: INFO: stderr: "" Mar 17 11:05:31.201: INFO: stdout: "update-demo-nautilus-2cqbn update-demo-nautilus-fkx78 " Mar 17 11:05:31.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2cqbn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:31.384: INFO: stderr: "" Mar 17 11:05:31.384: INFO: stdout: "" Mar 17 11:05:31.384: INFO: update-demo-nautilus-2cqbn is created but not running Mar 17 11:05:36.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:36.518: INFO: stderr: "" Mar 17 11:05:36.518: INFO: stdout: "update-demo-nautilus-2cqbn update-demo-nautilus-fkx78 " Mar 17 11:05:36.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2cqbn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:36.644: INFO: stderr: "" Mar 17 11:05:36.644: INFO: stdout: "true" Mar 17 11:05:36.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2cqbn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:36.740: INFO: stderr: "" Mar 17 11:05:36.740: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 17 11:05:36.740: INFO: validating pod update-demo-nautilus-2cqbn Mar 17 11:05:36.745: INFO: got data: { "image": "nautilus.jpg" } Mar 17 11:05:36.745: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 17 11:05:36.745: INFO: update-demo-nautilus-2cqbn is verified up and running Mar 17 11:05:36.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fkx78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:36.854: INFO: stderr: "" Mar 17 11:05:36.854: INFO: stdout: "true" Mar 17 11:05:36.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fkx78 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:36.937: INFO: stderr: "" Mar 17 11:05:36.937: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 17 11:05:36.937: INFO: validating pod update-demo-nautilus-fkx78 Mar 17 11:05:36.941: INFO: got data: { "image": "nautilus.jpg" } Mar 17 11:05:36.941: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 17 11:05:36.941: INFO: update-demo-nautilus-fkx78 is verified up and running STEP: using delete to clean up resources Mar 17 11:05:36.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:37.360: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 17 11:05:37.360: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 17 11:05:37.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-6p5gx' Mar 17 11:05:37.843: INFO: stderr: "No resources found.\n" Mar 17 11:05:37.843: INFO: stdout: "" Mar 17 11:05:37.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-6p5gx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 17 11:05:38.390: INFO: stderr: "" Mar 17 11:05:38.390: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:05:38.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6p5gx" for this suite. Mar 17 11:05:47.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:05:47.502: INFO: namespace: e2e-tests-kubectl-6p5gx, resource: bindings, ignored listing per whitelist Mar 17 11:05:47.507: INFO: namespace e2e-tests-kubectl-6p5gx deletion completed in 8.728841934s • [SLOW TEST:38.084 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:05:47.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-jdwcg/configmap-test-9e31439f-48a4-11e9-bf64-0242ac110009 STEP: Creating a pod to test consume configMaps Mar 17 11:05:47.766: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e31ebbe-48a4-11e9-bf64-0242ac110009" in namespace "e2e-tests-configmap-jdwcg" to be "success or failure" Mar 17 11:05:47.816: INFO: Pod "pod-configmaps-9e31ebbe-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 49.172864ms Mar 17 11:05:49.819: INFO: Pod "pod-configmaps-9e31ebbe-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05225258s Mar 17 11:05:51.896: INFO: Pod "pod-configmaps-9e31ebbe-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129664979s Mar 17 11:05:55.817: INFO: Pod "pod-configmaps-9e31ebbe-48a4-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050762882s STEP: Saw pod success Mar 17 11:05:55.817: INFO: Pod "pod-configmaps-9e31ebbe-48a4-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:05:57.327: INFO: Trying to get logs from node kube pod pod-configmaps-9e31ebbe-48a4-11e9-bf64-0242ac110009 container env-test: STEP: delete the pod Mar 17 11:05:57.923: INFO: Waiting for pod pod-configmaps-9e31ebbe-48a4-11e9-bf64-0242ac110009 to disappear Mar 17 11:05:57.978: INFO: Pod pod-configmaps-9e31ebbe-48a4-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:05:57.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-jdwcg" for this suite. Mar 17 11:06:04.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:06:04.198: INFO: namespace: e2e-tests-configmap-jdwcg, resource: bindings, ignored listing per whitelist Mar 17 11:06:04.238: INFO: namespace e2e-tests-configmap-jdwcg deletion completed in 6.25733653s • [SLOW TEST:16.731 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:06:04.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-a831737c-48a4-11e9-bf64-0242ac110009 STEP: Creating a pod to test consume configMaps Mar 17 11:06:04.661: INFO: Waiting up to 5m0s for pod "pod-configmaps-a8446507-48a4-11e9-bf64-0242ac110009" in namespace "e2e-tests-configmap-2s8j6" to be "success or failure" Mar 17 11:06:04.701: INFO: Pod "pod-configmaps-a8446507-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 40.11023ms Mar 17 11:06:06.704: INFO: Pod "pod-configmaps-a8446507-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04330921s Mar 17 11:06:08.819: INFO: Pod "pod-configmaps-a8446507-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.158261767s Mar 17 11:06:12.641: INFO: Pod "pod-configmaps-a8446507-48a4-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.979782633s STEP: Saw pod success Mar 17 11:06:12.641: INFO: Pod "pod-configmaps-a8446507-48a4-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:06:13.112: INFO: Trying to get logs from node kube pod pod-configmaps-a8446507-48a4-11e9-bf64-0242ac110009 container configmap-volume-test: STEP: delete the pod Mar 17 11:06:13.378: INFO: Waiting for pod pod-configmaps-a8446507-48a4-11e9-bf64-0242ac110009 to disappear Mar 17 11:06:13.421: INFO: Pod pod-configmaps-a8446507-48a4-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:06:13.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2s8j6" for this suite. Mar 17 11:06:19.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:06:19.754: INFO: namespace: e2e-tests-configmap-2s8j6, resource: bindings, ignored listing per whitelist Mar 17 11:06:19.778: INFO: namespace e2e-tests-configmap-2s8j6 deletion completed in 6.354026753s • [SLOW TEST:15.540 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:06:19.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Mar 17 11:06:20.160: INFO: Waiting up to 5m0s for pod "var-expansion-b17e4f16-48a4-11e9-bf64-0242ac110009" in namespace "e2e-tests-var-expansion-ml6wj" to be "success or failure" Mar 17 11:06:20.184: INFO: Pod "var-expansion-b17e4f16-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 24.534124ms Mar 17 11:06:22.187: INFO: Pod "var-expansion-b17e4f16-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027235502s Mar 17 11:06:24.191: INFO: Pod "var-expansion-b17e4f16-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030993011s Mar 17 11:06:26.649: INFO: Pod "var-expansion-b17e4f16-48a4-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.488953917s STEP: Saw pod success Mar 17 11:06:26.649: INFO: Pod "var-expansion-b17e4f16-48a4-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:06:26.657: INFO: Trying to get logs from node kube pod var-expansion-b17e4f16-48a4-11e9-bf64-0242ac110009 container dapi-container: STEP: delete the pod Mar 17 11:06:27.317: INFO: Waiting for pod var-expansion-b17e4f16-48a4-11e9-bf64-0242ac110009 to disappear Mar 17 11:06:27.352: INFO: Pod var-expansion-b17e4f16-48a4-11e9-bf64-0242ac110009 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:06:27.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-ml6wj" for this suite. Mar 17 11:06:37.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:06:37.224: INFO: namespace: e2e-tests-var-expansion-ml6wj, resource: bindings, ignored listing per whitelist Mar 17 11:06:37.436: INFO: namespace e2e-tests-var-expansion-ml6wj deletion completed in 10.079152114s • [SLOW TEST:17.658 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:06:37.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-bc10bf90-48a4-11e9-bf64-0242ac110009 STEP: Creating secret with name secret-projected-all-test-volume-bc10bf77-48a4-11e9-bf64-0242ac110009 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 17 11:06:37.894: INFO: Waiting up to 5m0s for pod "projected-volume-bc10bf35-48a4-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-627sn" to be "success or failure" Mar 17 11:06:37.916: INFO: Pod "projected-volume-bc10bf35-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 21.541989ms Mar 17 11:06:41.376: INFO: Pod "projected-volume-bc10bf35-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.482266312s Mar 17 11:06:43.379: INFO: Pod "projected-volume-bc10bf35-48a4-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 5.484980567s Mar 17 11:06:45.382: INFO: Pod "projected-volume-bc10bf35-48a4-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.487694834s STEP: Saw pod success Mar 17 11:06:45.382: INFO: Pod "projected-volume-bc10bf35-48a4-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:06:45.385: INFO: Trying to get logs from node kube pod projected-volume-bc10bf35-48a4-11e9-bf64-0242ac110009 container projected-all-volume-test: STEP: delete the pod Mar 17 11:06:45.749: INFO: Waiting for pod projected-volume-bc10bf35-48a4-11e9-bf64-0242ac110009 to disappear Mar 17 11:06:46.065: INFO: Pod projected-volume-bc10bf35-48a4-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:06:46.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-627sn" for this suite. Mar 17 11:06:52.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:06:52.370: INFO: namespace: e2e-tests-projected-627sn, resource: bindings, ignored listing per whitelist Mar 17 11:06:52.450: INFO: namespace e2e-tests-projected-627sn deletion completed in 6.312256719s • [SLOW TEST:15.015 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:06:52.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-c6357ddf-48a4-11e9-bf64-0242ac110009 STEP: Creating configMap with name cm-test-opt-upd-c6357e3f-48a4-11e9-bf64-0242ac110009 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c6357ddf-48a4-11e9-bf64-0242ac110009 STEP: Updating configmap cm-test-opt-upd-c6357e3f-48a4-11e9-bf64-0242ac110009 STEP: Creating configMap with name cm-test-opt-create-c6357e64-48a4-11e9-bf64-0242ac110009 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:07:07.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-87j2x" for this suite. 
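[Editor's note: the "optional updates should be reflected in volume" case above only logs high-level STEPs and not the pod spec. A rough sketch of the equivalent ConfigMap operations with plain kubectl, assuming illustrative key/value data and shortened ConfigMap names (the log appends unique suffixes, and the mounted paths are not shown here), would be:

    # Create the two ConfigMaps the test pod mounts as optional volumes (data keys are illustrative)
    kubectl -n e2e-tests-configmap-87j2x create configmap cm-test-opt-del --from-literal=data-1=value-1
    kubectl -n e2e-tests-configmap-87j2x create configmap cm-test-opt-upd --from-literal=data-1=value-1
    # Delete one and change the other; an optional configMap volume tolerates the missing source
    kubectl -n e2e-tests-configmap-87j2x delete configmap cm-test-opt-del
    kubectl -n e2e-tests-configmap-87j2x patch configmap cm-test-opt-upd -p '{"data":{"data-1":"value-2"}}'
    # Create a third ConfigMap only after the pod is already running
    kubectl -n e2e-tests-configmap-87j2x create configmap cm-test-opt-create --from-literal=data-1=value-1

The kubelet then syncs these changes into the mounted volume, which is what the "waiting to observe update in volume" STEP above is polling for before the namespace is destroyed.]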
Mar 17 11:07:31.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:07:31.270: INFO: namespace: e2e-tests-configmap-87j2x, resource: bindings, ignored listing per whitelist Mar 17 11:07:31.285: INFO: namespace e2e-tests-configmap-87j2x deletion completed in 24.09779886s • [SLOW TEST:38.835 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:07:31.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 11:07:31.612: INFO: Creating deployment "test-recreate-deployment" Mar 17 11:07:31.616: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 17 11:07:31.657: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Mar 17 11:07:33.662: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 17 11:07:33.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688417651, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688417651, loc:(*time.Location)(0x7b13a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688417652, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688417651, loc:(*time.Location)(0x7b13a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5dfdcc846d\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 17 11:07:35.667: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 17 11:07:35.673: INFO: Updating deployment test-recreate-deployment Mar 17 11:07:35.673: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 17 11:07:36.605: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-w48tp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-w48tp/deployments/test-recreate-deployment,UID:dc1ca914-48a4-11e9-a072-fa163e921bae,ResourceVersion:1285195,Generation:2,CreationTimestamp:2019-03-17 11:07:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-03-17 11:07:36 +0000 UTC 2019-03-17 11:07:36 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-03-17 11:07:36 +0000 UTC 2019-03-17 11:07:31 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-697fbf54bf" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 17 11:07:36.635: INFO: New ReplicaSet "test-recreate-deployment-697fbf54bf" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-697fbf54bf,GenerateName:,Namespace:e2e-tests-deployment-w48tp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-w48tp/replicasets/test-recreate-deployment-697fbf54bf,UID:deac1149-48a4-11e9-a072-fa163e921bae,ResourceVersion:1285192,Generation:1,CreationTimestamp:2019-03-17 11:07:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment 
test-recreate-deployment dc1ca914-48a4-11e9-a072-fa163e921bae 0xc0014d04d7 0xc0014d04d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 17 11:07:36.635: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 17 11:07:36.635: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5dfdcc846d,GenerateName:,Namespace:e2e-tests-deployment-w48tp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-w48tp/replicasets/test-recreate-deployment-5dfdcc846d,UID:dc23b146-48a4-11e9-a072-fa163e921bae,ResourceVersion:1285183,Generation:2,CreationTimestamp:2019-03-17 11:07:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5dfdcc846d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment dc1ca914-48a4-11e9-a072-fa163e921bae 0xc0014d0267 0xc0014d0268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5dfdcc846d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5dfdcc846d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil 
nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 17 11:07:36.639: INFO: Pod "test-recreate-deployment-697fbf54bf-vfgsc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-697fbf54bf-vfgsc,GenerateName:test-recreate-deployment-697fbf54bf-,Namespace:e2e-tests-deployment-w48tp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w48tp/pods/test-recreate-deployment-697fbf54bf-vfgsc,UID:dead266a-48a4-11e9-a072-fa163e921bae,ResourceVersion:1285197,Generation:0,CreationTimestamp:2019-03-17 11:07:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-697fbf54bf deac1149-48a4-11e9-a072-fa163e921bae 0xc0014d15f7 0xc0014d15f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-trz9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-trz9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-trz9n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014d1670} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014d1690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:07:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:07:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:07:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:07:36 
+0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:,StartTime:2019-03-17 11:07:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:07:36.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-w48tp" for this suite. Mar 17 11:07:46.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:07:47.050: INFO: namespace: e2e-tests-deployment-w48tp, resource: bindings, ignored listing per whitelist Mar 17 11:07:47.055: INFO: namespace e2e-tests-deployment-w48tp deletion completed in 10.414266454s • [SLOW TEST:15.770 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:07:47.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-4jqdr I0317 11:07:47.569130 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-4jqdr, replica count: 1 I0317 11:07:48.619511 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0317 11:07:49.619688 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0317 11:07:50.619833 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0317 11:07:51.619987 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0317 11:07:52.620147 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0317 11:07:53.620318 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0317 11:07:54.620524 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0317 11:07:55.620666 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 
created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0317 11:07:56.620837 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 17 11:07:56.875: INFO: Created: latency-svc-p29tt Mar 17 11:07:56.924: INFO: Got endpoints: latency-svc-p29tt [203.722792ms] Mar 17 11:07:57.243: INFO: Created: latency-svc-6xrmd Mar 17 11:07:57.264: INFO: Got endpoints: latency-svc-6xrmd [339.924792ms] Mar 17 11:07:57.502: INFO: Created: latency-svc-tjx58 Mar 17 11:07:57.506: INFO: Got endpoints: latency-svc-tjx58 [581.041634ms] Mar 17 11:07:57.768: INFO: Created: latency-svc-d2jkw Mar 17 11:07:57.771: INFO: Got endpoints: latency-svc-d2jkw [846.974675ms] Mar 17 11:07:58.092: INFO: Created: latency-svc-zcxg9 Mar 17 11:07:58.094: INFO: Got endpoints: latency-svc-zcxg9 [1.169250857s] Mar 17 11:07:58.485: INFO: Created: latency-svc-rk9sz Mar 17 11:07:58.491: INFO: Got endpoints: latency-svc-rk9sz [1.566096612s] Mar 17 11:07:58.902: INFO: Created: latency-svc-4zqbz Mar 17 11:07:58.911: INFO: Got endpoints: latency-svc-4zqbz [1.986316432s] Mar 17 11:07:59.186: INFO: Created: latency-svc-jsvrc Mar 17 11:07:59.205: INFO: Got endpoints: latency-svc-jsvrc [2.280388s] Mar 17 11:07:59.402: INFO: Created: latency-svc-gjnlf Mar 17 11:07:59.406: INFO: Got endpoints: latency-svc-gjnlf [2.481594005s] Mar 17 11:07:59.751: INFO: Created: latency-svc-pdn6t Mar 17 11:07:59.757: INFO: Got endpoints: latency-svc-pdn6t [2.831933102s] Mar 17 11:08:00.028: INFO: Created: latency-svc-phhcn Mar 17 11:08:00.028: INFO: Got endpoints: latency-svc-phhcn [3.103956575s] Mar 17 11:08:00.284: INFO: Created: latency-svc-sg67t Mar 17 11:08:00.285: INFO: Got endpoints: latency-svc-sg67t [3.360843662s] Mar 17 11:08:00.654: INFO: Created: latency-svc-j6rnm Mar 17 11:08:00.723: INFO: Got endpoints: latency-svc-j6rnm [3.798419442s] Mar 17 11:08:01.139: INFO: Created: latency-svc-mzls6 Mar 17 11:08:01.143: INFO: Got endpoints: latency-svc-mzls6 [4.21873219s] Mar 17 11:08:01.599: INFO: Created: latency-svc-qljp6 Mar 17 11:08:01.607: INFO: Got endpoints: latency-svc-qljp6 [4.682439514s] Mar 17 11:08:01.958: INFO: Created: latency-svc-c7f5p Mar 17 11:08:01.973: INFO: Got endpoints: latency-svc-c7f5p [5.048117298s] Mar 17 11:08:02.485: INFO: Created: latency-svc-ndmg5 Mar 17 11:08:02.935: INFO: Got endpoints: latency-svc-ndmg5 [5.670847639s] Mar 17 11:08:03.456: INFO: Created: latency-svc-v62vh Mar 17 11:08:04.593: INFO: Got endpoints: latency-svc-v62vh [7.08776316s] Mar 17 11:08:04.600: INFO: Created: latency-svc-jmnvj Mar 17 11:08:04.635: INFO: Got endpoints: latency-svc-jmnvj [6.863176745s] Mar 17 11:08:05.162: INFO: Created: latency-svc-xgxtw Mar 17 11:08:05.222: INFO: Got endpoints: latency-svc-xgxtw [7.128472127s] Mar 17 11:08:05.706: INFO: Created: latency-svc-dtd2k Mar 17 11:08:05.720: INFO: Got endpoints: latency-svc-dtd2k [7.229184697s] Mar 17 11:08:06.224: INFO: Created: latency-svc-rwph4 Mar 17 11:08:06.230: INFO: Got endpoints: latency-svc-rwph4 [7.319388435s] Mar 17 11:08:06.679: INFO: Created: latency-svc-wlfb2 Mar 17 11:08:06.719: INFO: Got endpoints: latency-svc-wlfb2 [7.5135925s] Mar 17 11:08:07.055: INFO: Created: latency-svc-gs22q Mar 17 11:08:07.405: INFO: Created: latency-svc-ll2z9 Mar 17 11:08:07.412: INFO: Got endpoints: latency-svc-gs22q [8.00621768s] Mar 17 11:08:07.426: INFO: Got endpoints: latency-svc-ll2z9 [7.669382941s] Mar 17 11:08:07.739: INFO: Created: latency-svc-mn9h4 Mar 17 
11:08:07.739: INFO: Got endpoints: latency-svc-mn9h4 [7.710348283s] Mar 17 11:08:07.979: INFO: Created: latency-svc-qsm8k Mar 17 11:08:08.018: INFO: Got endpoints: latency-svc-qsm8k [7.732345568s] Mar 17 11:08:08.231: INFO: Created: latency-svc-gnznh Mar 17 11:08:08.237: INFO: Got endpoints: latency-svc-gnznh [7.514539132s] Mar 17 11:08:08.530: INFO: Created: latency-svc-kzs6h Mar 17 11:08:08.536: INFO: Got endpoints: latency-svc-kzs6h [7.392567904s] Mar 17 11:08:08.810: INFO: Created: latency-svc-pjd2t Mar 17 11:08:08.811: INFO: Got endpoints: latency-svc-pjd2t [7.203326778s] Mar 17 11:08:09.033: INFO: Created: latency-svc-kmxgc Mar 17 11:08:09.039: INFO: Got endpoints: latency-svc-kmxgc [7.066662432s] Mar 17 11:08:09.213: INFO: Created: latency-svc-vwlk2 Mar 17 11:08:09.214: INFO: Got endpoints: latency-svc-vwlk2 [6.27834453s] Mar 17 11:08:09.470: INFO: Created: latency-svc-wrx9s Mar 17 11:08:09.472: INFO: Got endpoints: latency-svc-wrx9s [4.878274913s] Mar 17 11:08:09.769: INFO: Created: latency-svc-rh2mq Mar 17 11:08:09.771: INFO: Got endpoints: latency-svc-rh2mq [5.136616352s] Mar 17 11:08:09.837: INFO: Created: latency-svc-pg5fp Mar 17 11:08:09.840: INFO: Got endpoints: latency-svc-pg5fp [4.617787633s] Mar 17 11:08:10.025: INFO: Created: latency-svc-x57z8 Mar 17 11:08:10.031: INFO: Got endpoints: latency-svc-x57z8 [4.310957583s] Mar 17 11:08:10.300: INFO: Created: latency-svc-lcj99 Mar 17 11:08:10.305: INFO: Got endpoints: latency-svc-lcj99 [4.074832456s] Mar 17 11:08:10.611: INFO: Created: latency-svc-k7r8q Mar 17 11:08:10.613: INFO: Got endpoints: latency-svc-k7r8q [3.894463744s] Mar 17 11:08:10.716: INFO: Created: latency-svc-dgm82 Mar 17 11:08:11.031: INFO: Got endpoints: latency-svc-dgm82 [3.618423562s] Mar 17 11:08:11.041: INFO: Created: latency-svc-2st74 Mar 17 11:08:11.051: INFO: Got endpoints: latency-svc-2st74 [3.624739979s] Mar 17 11:08:11.391: INFO: Created: latency-svc-nrltl Mar 17 11:08:11.406: INFO: Got endpoints: latency-svc-nrltl [3.66717033s] Mar 17 11:08:11.478: INFO: Created: latency-svc-v7w9n Mar 17 11:08:11.825: INFO: Got endpoints: latency-svc-v7w9n [3.806836218s] Mar 17 11:08:11.830: INFO: Created: latency-svc-5xbk7 Mar 17 11:08:11.850: INFO: Got endpoints: latency-svc-5xbk7 [3.61291387s] Mar 17 11:08:12.439: INFO: Created: latency-svc-p8cvq Mar 17 11:08:12.450: INFO: Got endpoints: latency-svc-p8cvq [3.914427212s] Mar 17 11:08:12.937: INFO: Created: latency-svc-jf8bc Mar 17 11:08:12.956: INFO: Got endpoints: latency-svc-jf8bc [4.145193686s] Mar 17 11:08:13.234: INFO: Created: latency-svc-6w4nm Mar 17 11:08:13.242: INFO: Got endpoints: latency-svc-6w4nm [4.20251035s] Mar 17 11:08:13.895: INFO: Created: latency-svc-6fv2v Mar 17 11:08:13.904: INFO: Got endpoints: latency-svc-6fv2v [4.690352037s] Mar 17 11:08:14.450: INFO: Created: latency-svc-8pzcs Mar 17 11:08:14.457: INFO: Got endpoints: latency-svc-8pzcs [4.984917754s] Mar 17 11:08:15.238: INFO: Created: latency-svc-w5l6v Mar 17 11:08:17.477: INFO: Got endpoints: latency-svc-w5l6v [7.705290494s] Mar 17 11:08:22.551: INFO: Created: latency-svc-9mq89 Mar 17 11:08:22.847: INFO: Created: latency-svc-79wb2 Mar 17 11:08:23.115: INFO: Created: latency-svc-95gvv Mar 17 11:08:23.143: INFO: Got endpoints: latency-svc-9mq89 [13.302881333s] Mar 17 11:08:23.520: INFO: Got endpoints: latency-svc-79wb2 [13.489714636s] Mar 17 11:08:23.524: INFO: Created: latency-svc-tvtdq Mar 17 11:08:23.550: INFO: Got endpoints: latency-svc-tvtdq [12.936712369s] Mar 17 11:08:23.552: INFO: Got endpoints: latency-svc-95gvv [13.246660779s] Mar 
17 11:08:23.811: INFO: Created: latency-svc-6zpgs Mar 17 11:08:23.819: INFO: Got endpoints: latency-svc-6zpgs [12.788116497s] Mar 17 11:08:24.029: INFO: Created: latency-svc-5qzns Mar 17 11:08:24.045: INFO: Got endpoints: latency-svc-5qzns [12.994350523s] Mar 17 11:08:24.126: INFO: Created: latency-svc-wtw8k Mar 17 11:08:24.321: INFO: Got endpoints: latency-svc-wtw8k [12.914846763s] Mar 17 11:08:24.333: INFO: Created: latency-svc-8spt2 Mar 17 11:08:24.351: INFO: Got endpoints: latency-svc-8spt2 [12.526433043s] Mar 17 11:08:24.584: INFO: Created: latency-svc-m2ww6 Mar 17 11:08:24.589: INFO: Got endpoints: latency-svc-m2ww6 [12.738406376s] Mar 17 11:08:24.997: INFO: Created: latency-svc-jnph6 Mar 17 11:08:25.009: INFO: Got endpoints: latency-svc-jnph6 [12.558156027s] Mar 17 11:08:25.247: INFO: Created: latency-svc-8ppd6 Mar 17 11:08:25.263: INFO: Got endpoints: latency-svc-8ppd6 [12.307532992s] Mar 17 11:08:25.518: INFO: Created: latency-svc-49rb8 Mar 17 11:08:25.590: INFO: Got endpoints: latency-svc-49rb8 [12.3476904s] Mar 17 11:08:25.788: INFO: Created: latency-svc-lhkb8 Mar 17 11:08:25.797: INFO: Got endpoints: latency-svc-lhkb8 [11.892614494s] Mar 17 11:08:26.047: INFO: Created: latency-svc-47x9w Mar 17 11:08:26.051: INFO: Got endpoints: latency-svc-47x9w [11.59392043s] Mar 17 11:08:26.283: INFO: Created: latency-svc-zlh9s Mar 17 11:08:26.286: INFO: Got endpoints: latency-svc-zlh9s [489.113404ms] Mar 17 11:08:26.530: INFO: Created: latency-svc-989xw Mar 17 11:08:26.533: INFO: Got endpoints: latency-svc-989xw [9.056739858s] Mar 17 11:08:26.707: INFO: Created: latency-svc-tgnc6 Mar 17 11:08:26.711: INFO: Got endpoints: latency-svc-tgnc6 [3.567933199s] Mar 17 11:08:26.976: INFO: Created: latency-svc-r69qf Mar 17 11:08:26.979: INFO: Got endpoints: latency-svc-r69qf [3.458408594s] Mar 17 11:08:27.236: INFO: Created: latency-svc-wp4sf Mar 17 11:08:27.261: INFO: Got endpoints: latency-svc-wp4sf [3.711587925s] Mar 17 11:08:27.508: INFO: Created: latency-svc-5rpp4 Mar 17 11:08:27.546: INFO: Got endpoints: latency-svc-5rpp4 [3.993833475s] Mar 17 11:08:27.775: INFO: Created: latency-svc-tm7pm Mar 17 11:08:27.788: INFO: Got endpoints: latency-svc-tm7pm [3.968646145s] Mar 17 11:08:28.008: INFO: Created: latency-svc-lkwzh Mar 17 11:08:28.008: INFO: Got endpoints: latency-svc-lkwzh [3.963184734s] Mar 17 11:08:28.214: INFO: Created: latency-svc-4zxhj Mar 17 11:08:28.320: INFO: Created: latency-svc-zpffm Mar 17 11:08:28.548: INFO: Got endpoints: latency-svc-4zxhj [4.227546048s] Mar 17 11:08:28.550: INFO: Got endpoints: latency-svc-zpffm [4.199186902s] Mar 17 11:08:28.559: INFO: Created: latency-svc-blcvf Mar 17 11:08:28.580: INFO: Got endpoints: latency-svc-blcvf [3.990907839s] Mar 17 11:08:28.798: INFO: Created: latency-svc-gpztw Mar 17 11:08:28.811: INFO: Got endpoints: latency-svc-gpztw [3.802306247s] Mar 17 11:08:29.150: INFO: Created: latency-svc-dvhwp Mar 17 11:08:29.150: INFO: Got endpoints: latency-svc-dvhwp [3.886185495s] Mar 17 11:08:29.441: INFO: Created: latency-svc-6cgcs Mar 17 11:08:29.466: INFO: Got endpoints: latency-svc-6cgcs [3.876446973s] Mar 17 11:08:29.725: INFO: Created: latency-svc-4sx7r Mar 17 11:08:29.757: INFO: Got endpoints: latency-svc-4sx7r [3.706279442s] Mar 17 11:08:30.030: INFO: Created: latency-svc-m9vt8 Mar 17 11:08:30.035: INFO: Got endpoints: latency-svc-m9vt8 [3.749625421s] Mar 17 11:08:30.332: INFO: Created: latency-svc-gd8md Mar 17 11:08:30.335: INFO: Got endpoints: latency-svc-gd8md [3.801672049s] Mar 17 11:08:30.535: INFO: Created: latency-svc-8p84p Mar 17 
11:08:30.537: INFO: Got endpoints: latency-svc-8p84p [3.826070768s] Mar 17 11:08:30.828: INFO: Created: latency-svc-k6jr7 Mar 17 11:08:30.841: INFO: Got endpoints: latency-svc-k6jr7 [3.861822231s] Mar 17 11:08:31.055: INFO: Created: latency-svc-wn8zj Mar 17 11:08:31.064: INFO: Got endpoints: latency-svc-wn8zj [3.802263984s] Mar 17 11:08:31.125: INFO: Created: latency-svc-r27tb Mar 17 11:08:31.133: INFO: Got endpoints: latency-svc-r27tb [3.587118395s] Mar 17 11:08:31.381: INFO: Created: latency-svc-smxzb Mar 17 11:08:31.425: INFO: Got endpoints: latency-svc-smxzb [3.637227859s] Mar 17 11:08:31.643: INFO: Created: latency-svc-8qg2b Mar 17 11:08:31.690: INFO: Got endpoints: latency-svc-8qg2b [3.681572484s] Mar 17 11:08:31.951: INFO: Created: latency-svc-q48d8 Mar 17 11:08:31.964: INFO: Got endpoints: latency-svc-q48d8 [3.415055581s] Mar 17 11:08:32.186: INFO: Created: latency-svc-h8nzp Mar 17 11:08:32.236: INFO: Got endpoints: latency-svc-h8nzp [3.685358124s] Mar 17 11:08:32.439: INFO: Created: latency-svc-b87sh Mar 17 11:08:32.457: INFO: Got endpoints: latency-svc-b87sh [3.876838475s] Mar 17 11:08:32.713: INFO: Created: latency-svc-bp5w8 Mar 17 11:08:32.733: INFO: Got endpoints: latency-svc-bp5w8 [3.92171414s] Mar 17 11:08:32.966: INFO: Created: latency-svc-jvn5s Mar 17 11:08:32.974: INFO: Got endpoints: latency-svc-jvn5s [3.824768703s] Mar 17 11:08:33.261: INFO: Created: latency-svc-ln97b Mar 17 11:08:33.651: INFO: Got endpoints: latency-svc-ln97b [4.184524813s] Mar 17 11:08:33.655: INFO: Created: latency-svc-mkb4r Mar 17 11:08:34.280: INFO: Created: latency-svc-qlvrk Mar 17 11:08:34.288: INFO: Got endpoints: latency-svc-mkb4r [4.531008916s] Mar 17 11:08:35.438: INFO: Got endpoints: latency-svc-qlvrk [5.402441187s] Mar 17 11:08:35.453: INFO: Created: latency-svc-wqwlz Mar 17 11:08:35.507: INFO: Got endpoints: latency-svc-wqwlz [5.172086333s] Mar 17 11:08:35.769: INFO: Created: latency-svc-88vf4 Mar 17 11:08:35.789: INFO: Got endpoints: latency-svc-88vf4 [5.251590817s] Mar 17 11:08:36.051: INFO: Created: latency-svc-jtrdm Mar 17 11:08:36.093: INFO: Got endpoints: latency-svc-jtrdm [5.252426986s] Mar 17 11:08:36.341: INFO: Created: latency-svc-fk6lr Mar 17 11:08:36.559: INFO: Got endpoints: latency-svc-fk6lr [5.494981016s] Mar 17 11:08:36.564: INFO: Created: latency-svc-f7vp6 Mar 17 11:08:36.590: INFO: Got endpoints: latency-svc-f7vp6 [5.456800736s] Mar 17 11:08:36.784: INFO: Created: latency-svc-rfvgf Mar 17 11:08:36.847: INFO: Got endpoints: latency-svc-rfvgf [5.421774755s] Mar 17 11:08:37.065: INFO: Created: latency-svc-8w7dt Mar 17 11:08:37.075: INFO: Got endpoints: latency-svc-8w7dt [5.384728573s] Mar 17 11:08:37.314: INFO: Created: latency-svc-xc76c Mar 17 11:08:37.323: INFO: Got endpoints: latency-svc-xc76c [5.359819494s] Mar 17 11:08:37.872: INFO: Created: latency-svc-n6dfp Mar 17 11:08:37.890: INFO: Got endpoints: latency-svc-n6dfp [5.654286038s] Mar 17 11:08:38.115: INFO: Created: latency-svc-j2hzl Mar 17 11:08:38.523: INFO: Got endpoints: latency-svc-j2hzl [6.066644924s] Mar 17 11:08:38.531: INFO: Created: latency-svc-ngh5s Mar 17 11:08:38.545: INFO: Got endpoints: latency-svc-ngh5s [5.812704075s] Mar 17 11:08:38.784: INFO: Created: latency-svc-tb2nn Mar 17 11:08:38.806: INFO: Got endpoints: latency-svc-tb2nn [5.831852439s] Mar 17 11:08:39.030: INFO: Created: latency-svc-ptk86 Mar 17 11:08:39.032: INFO: Got endpoints: latency-svc-ptk86 [5.380882532s] Mar 17 11:08:39.238: INFO: Created: latency-svc-nqw6g Mar 17 11:08:39.249: INFO: Got endpoints: latency-svc-nqw6g [4.961348251s] Mar 
17 11:08:41.201: INFO: Created: latency-svc-qsmx2 Mar 17 11:08:41.202: INFO: Got endpoints: latency-svc-qsmx2 [5.764119084s] Mar 17 11:08:41.297: INFO: Created: latency-svc-25gcw Mar 17 11:08:41.602: INFO: Got endpoints: latency-svc-25gcw [6.094716743s] Mar 17 11:08:41.639: INFO: Created: latency-svc-44j62 Mar 17 11:08:41.655: INFO: Got endpoints: latency-svc-44j62 [5.866137916s] Mar 17 11:08:42.104: INFO: Created: latency-svc-nrjsf Mar 17 11:08:42.109: INFO: Got endpoints: latency-svc-nrjsf [6.015429174s] Mar 17 11:08:42.709: INFO: Created: latency-svc-c7c4l Mar 17 11:08:42.731: INFO: Got endpoints: latency-svc-c7c4l [6.171907821s] Mar 17 11:08:43.063: INFO: Created: latency-svc-vp5x7 Mar 17 11:08:43.071: INFO: Got endpoints: latency-svc-vp5x7 [6.480992054s] Mar 17 11:08:43.499: INFO: Created: latency-svc-88kqs Mar 17 11:08:43.499: INFO: Got endpoints: latency-svc-88kqs [6.651925607s] Mar 17 11:08:43.912: INFO: Created: latency-svc-r2tsj Mar 17 11:08:44.333: INFO: Created: latency-svc-kbfst Mar 17 11:08:44.352: INFO: Got endpoints: latency-svc-r2tsj [7.277623859s] Mar 17 11:08:44.362: INFO: Got endpoints: latency-svc-kbfst [7.038664963s] Mar 17 11:08:44.934: INFO: Created: latency-svc-vbv48 Mar 17 11:08:44.934: INFO: Got endpoints: latency-svc-vbv48 [7.04384789s] Mar 17 11:08:45.280: INFO: Created: latency-svc-xlpb7 Mar 17 11:08:45.289: INFO: Got endpoints: latency-svc-xlpb7 [6.765439602s] Mar 17 11:08:45.833: INFO: Created: latency-svc-7tfvf Mar 17 11:08:45.844: INFO: Got endpoints: latency-svc-7tfvf [7.298631323s] Mar 17 11:08:46.276: INFO: Created: latency-svc-nqhv7 Mar 17 11:08:46.304: INFO: Got endpoints: latency-svc-nqhv7 [7.498016793s] Mar 17 11:08:46.594: INFO: Created: latency-svc-p94sd Mar 17 11:08:46.597: INFO: Got endpoints: latency-svc-p94sd [7.565591405s] Mar 17 11:08:46.883: INFO: Created: latency-svc-tbmbw Mar 17 11:08:46.886: INFO: Got endpoints: latency-svc-tbmbw [7.636866504s] Mar 17 11:08:47.409: INFO: Created: latency-svc-r9d5f Mar 17 11:08:47.416: INFO: Got endpoints: latency-svc-r9d5f [6.213946738s] Mar 17 11:08:48.178: INFO: Created: latency-svc-jh7s6 Mar 17 11:08:48.187: INFO: Got endpoints: latency-svc-jh7s6 [6.584738226s] Mar 17 11:08:48.631: INFO: Created: latency-svc-55hcz Mar 17 11:08:48.637: INFO: Got endpoints: latency-svc-55hcz [6.982077302s] Mar 17 11:08:48.888: INFO: Created: latency-svc-jccvw Mar 17 11:08:48.912: INFO: Got endpoints: latency-svc-jccvw [6.803317744s] Mar 17 11:08:49.348: INFO: Created: latency-svc-v89jv Mar 17 11:08:49.348: INFO: Got endpoints: latency-svc-v89jv [6.617021252s] Mar 17 11:08:49.908: INFO: Created: latency-svc-5dxc9 Mar 17 11:08:49.924: INFO: Got endpoints: latency-svc-5dxc9 [6.853426903s] Mar 17 11:08:50.371: INFO: Created: latency-svc-6gbmx Mar 17 11:08:50.956: INFO: Got endpoints: latency-svc-6gbmx [7.457174925s] Mar 17 11:08:50.961: INFO: Created: latency-svc-ms9fr Mar 17 11:08:51.702: INFO: Got endpoints: latency-svc-ms9fr [7.349779375s] Mar 17 11:08:52.253: INFO: Created: latency-svc-whr6z Mar 17 11:08:52.256: INFO: Got endpoints: latency-svc-whr6z [7.893829643s] Mar 17 11:08:54.280: INFO: Created: latency-svc-jj2cp Mar 17 11:08:54.290: INFO: Got endpoints: latency-svc-jj2cp [9.356152913s] Mar 17 11:08:55.159: INFO: Created: latency-svc-7kgxg Mar 17 11:08:55.700: INFO: Created: latency-svc-zd9x7 Mar 17 11:08:55.749: INFO: Got endpoints: latency-svc-zd9x7 [9.905127488s] Mar 17 11:08:55.755: INFO: Got endpoints: latency-svc-7kgxg [10.466564764s] Mar 17 11:08:56.396: INFO: Created: latency-svc-bklfz Mar 17 11:08:56.408: 
INFO: Got endpoints: latency-svc-bklfz [10.103886127s] Mar 17 11:08:57.449: INFO: Created: latency-svc-7dtrm Mar 17 11:08:57.459: INFO: Got endpoints: latency-svc-7dtrm [10.861982264s] Mar 17 11:08:58.057: INFO: Created: latency-svc-4xlvw Mar 17 11:08:58.057: INFO: Got endpoints: latency-svc-4xlvw [11.170866293s] Mar 17 11:09:00.675: INFO: Created: latency-svc-6r6td Mar 17 11:09:00.675: INFO: Got endpoints: latency-svc-6r6td [13.25879752s] Mar 17 11:09:01.185: INFO: Created: latency-svc-6vv98 Mar 17 11:09:01.229: INFO: Got endpoints: latency-svc-6vv98 [13.041901302s] Mar 17 11:09:01.659: INFO: Created: latency-svc-vgwlq Mar 17 11:09:01.687: INFO: Got endpoints: latency-svc-vgwlq [13.049434957s] Mar 17 11:09:02.087: INFO: Created: latency-svc-fsx6m Mar 17 11:09:02.089: INFO: Got endpoints: latency-svc-fsx6m [13.176494499s] Mar 17 11:09:02.391: INFO: Created: latency-svc-ggfr6 Mar 17 11:09:02.399: INFO: Got endpoints: latency-svc-ggfr6 [13.051698s] Mar 17 11:09:02.863: INFO: Created: latency-svc-svpws Mar 17 11:09:02.870: INFO: Got endpoints: latency-svc-svpws [12.94552685s] Mar 17 11:09:03.230: INFO: Created: latency-svc-kmfqx Mar 17 11:09:03.325: INFO: Got endpoints: latency-svc-kmfqx [12.369421372s] Mar 17 11:09:03.637: INFO: Created: latency-svc-ffkqd Mar 17 11:09:03.668: INFO: Got endpoints: latency-svc-ffkqd [11.965963488s] Mar 17 11:09:04.079: INFO: Created: latency-svc-2kddg Mar 17 11:09:04.085: INFO: Got endpoints: latency-svc-2kddg [11.829089228s] Mar 17 11:09:04.468: INFO: Created: latency-svc-8c62z Mar 17 11:09:04.471: INFO: Got endpoints: latency-svc-8c62z [10.180713968s] Mar 17 11:09:04.702: INFO: Created: latency-svc-8n25f Mar 17 11:09:04.739: INFO: Got endpoints: latency-svc-8n25f [8.990028384s] Mar 17 11:09:04.954: INFO: Created: latency-svc-swk9z Mar 17 11:09:04.968: INFO: Got endpoints: latency-svc-swk9z [9.213145778s] Mar 17 11:09:05.215: INFO: Created: latency-svc-cpx6c Mar 17 11:09:05.215: INFO: Got endpoints: latency-svc-cpx6c [8.807092078s] Mar 17 11:09:05.603: INFO: Created: latency-svc-bj79t Mar 17 11:09:05.625: INFO: Got endpoints: latency-svc-bj79t [8.165881371s] Mar 17 11:09:05.898: INFO: Created: latency-svc-t4xp9 Mar 17 11:09:06.182: INFO: Created: latency-svc-zjbjt Mar 17 11:09:06.199: INFO: Got endpoints: latency-svc-t4xp9 [8.142180754s] Mar 17 11:09:06.379: INFO: Created: latency-svc-2mc9h Mar 17 11:09:06.386: INFO: Got endpoints: latency-svc-zjbjt [5.711005077s] Mar 17 11:09:06.410: INFO: Got endpoints: latency-svc-2mc9h [5.181459288s] Mar 17 11:09:06.710: INFO: Created: latency-svc-8l74l Mar 17 11:09:06.712: INFO: Got endpoints: latency-svc-8l74l [5.024901842s] Mar 17 11:09:07.018: INFO: Created: latency-svc-khzlq Mar 17 11:09:07.034: INFO: Got endpoints: latency-svc-khzlq [4.945194279s] Mar 17 11:09:07.321: INFO: Created: latency-svc-55gzd Mar 17 11:09:07.560: INFO: Got endpoints: latency-svc-55gzd [5.160240114s] Mar 17 11:09:07.582: INFO: Created: latency-svc-szpcd Mar 17 11:09:07.842: INFO: Created: latency-svc-8s7j8 Mar 17 11:09:07.842: INFO: Got endpoints: latency-svc-szpcd [4.972193018s] Mar 17 11:09:07.860: INFO: Got endpoints: latency-svc-8s7j8 [4.53485404s] Mar 17 11:09:08.153: INFO: Created: latency-svc-4lqzj Mar 17 11:09:08.184: INFO: Got endpoints: latency-svc-4lqzj [4.515772209s] Mar 17 11:09:08.421: INFO: Created: latency-svc-9tjc9 Mar 17 11:09:08.430: INFO: Got endpoints: latency-svc-9tjc9 [4.345107914s] Mar 17 11:09:08.695: INFO: Created: latency-svc-n58db Mar 17 11:09:08.697: INFO: Got endpoints: latency-svc-n58db [4.22616522s] Mar 17 
11:09:08.933: INFO: Created: latency-svc-ngk68 Mar 17 11:09:08.936: INFO: Got endpoints: latency-svc-ngk68 [4.197081642s] Mar 17 11:09:09.147: INFO: Created: latency-svc-dgpxl Mar 17 11:09:09.162: INFO: Got endpoints: latency-svc-dgpxl [4.192990419s] Mar 17 11:09:09.448: INFO: Created: latency-svc-4bx6s Mar 17 11:09:09.455: INFO: Got endpoints: latency-svc-4bx6s [4.239297776s] Mar 17 11:09:09.737: INFO: Created: latency-svc-ngf9j Mar 17 11:09:09.784: INFO: Got endpoints: latency-svc-ngf9j [4.158724023s] Mar 17 11:09:10.036: INFO: Created: latency-svc-fgnsg Mar 17 11:09:10.040: INFO: Got endpoints: latency-svc-fgnsg [3.840063263s] Mar 17 11:09:10.324: INFO: Created: latency-svc-gz5h4 Mar 17 11:09:10.334: INFO: Got endpoints: latency-svc-gz5h4 [3.947987526s] Mar 17 11:09:10.605: INFO: Created: latency-svc-7prdt Mar 17 11:09:10.607: INFO: Got endpoints: latency-svc-7prdt [4.196396176s] Mar 17 11:09:10.987: INFO: Created: latency-svc-hr6kp Mar 17 11:09:11.008: INFO: Got endpoints: latency-svc-hr6kp [4.296738856s] Mar 17 11:09:11.239: INFO: Created: latency-svc-spjs4 Mar 17 11:09:11.243: INFO: Got endpoints: latency-svc-spjs4 [4.209345517s] Mar 17 11:09:11.507: INFO: Created: latency-svc-jqxhz Mar 17 11:09:11.517: INFO: Got endpoints: latency-svc-jqxhz [3.956848815s] Mar 17 11:09:11.907: INFO: Created: latency-svc-lhww8 Mar 17 11:09:11.909: INFO: Got endpoints: latency-svc-lhww8 [4.066840382s] Mar 17 11:09:12.236: INFO: Created: latency-svc-mq2kh Mar 17 11:09:12.276: INFO: Got endpoints: latency-svc-mq2kh [4.415389845s] Mar 17 11:09:12.517: INFO: Created: latency-svc-grls8 Mar 17 11:09:12.521: INFO: Got endpoints: latency-svc-grls8 [4.337127959s] Mar 17 11:09:12.866: INFO: Created: latency-svc-5zwmz Mar 17 11:09:12.876: INFO: Got endpoints: latency-svc-5zwmz [4.445688672s] Mar 17 11:09:13.091: INFO: Created: latency-svc-kv74s Mar 17 11:09:13.091: INFO: Got endpoints: latency-svc-kv74s [4.393504361s] Mar 17 11:09:13.278: INFO: Created: latency-svc-xjxd7 Mar 17 11:09:13.278: INFO: Got endpoints: latency-svc-xjxd7 [4.341612577s] Mar 17 11:09:13.574: INFO: Created: latency-svc-lfgdj Mar 17 11:09:13.574: INFO: Got endpoints: latency-svc-lfgdj [4.412054961s] Mar 17 11:09:13.820: INFO: Created: latency-svc-spdq7 Mar 17 11:09:13.827: INFO: Got endpoints: latency-svc-spdq7 [4.372576638s] Mar 17 11:09:14.094: INFO: Created: latency-svc-s92rg Mar 17 11:09:16.144: INFO: Got endpoints: latency-svc-s92rg [6.360112371s] Mar 17 11:09:16.199: INFO: Created: latency-svc-bd5hk Mar 17 11:09:16.204: INFO: Got endpoints: latency-svc-bd5hk [6.164727243s] Mar 17 11:09:16.465: INFO: Created: latency-svc-9kb82 Mar 17 11:09:16.713: INFO: Got endpoints: latency-svc-9kb82 [6.379394486s] Mar 17 11:09:16.728: INFO: Created: latency-svc-6l2g2 Mar 17 11:09:16.759: INFO: Got endpoints: latency-svc-6l2g2 [6.152056106s] Mar 17 11:09:17.020: INFO: Created: latency-svc-x4mss Mar 17 11:09:17.041: INFO: Got endpoints: latency-svc-x4mss [6.032953101s] Mar 17 11:09:17.325: INFO: Created: latency-svc-cznzr Mar 17 11:09:17.335: INFO: Got endpoints: latency-svc-cznzr [6.091322382s] Mar 17 11:09:17.610: INFO: Created: latency-svc-zj4d8 Mar 17 11:09:17.632: INFO: Got endpoints: latency-svc-zj4d8 [6.115029564s] Mar 17 11:09:17.832: INFO: Created: latency-svc-r852n Mar 17 11:09:17.840: INFO: Got endpoints: latency-svc-r852n [5.93123493s] Mar 17 11:09:18.158: INFO: Created: latency-svc-24qwk Mar 17 11:09:18.164: INFO: Got endpoints: latency-svc-24qwk [5.888630292s] Mar 17 11:09:18.434: INFO: Created: latency-svc-xmmdh Mar 17 11:09:18.452: 
INFO: Got endpoints: latency-svc-xmmdh [5.930650337s] Mar 17 11:09:18.702: INFO: Created: latency-svc-j2cbt Mar 17 11:09:18.710: INFO: Got endpoints: latency-svc-j2cbt [5.833794635s] Mar 17 11:09:18.992: INFO: Created: latency-svc-49xwr Mar 17 11:09:19.004: INFO: Got endpoints: latency-svc-49xwr [5.913434792s] Mar 17 11:09:19.206: INFO: Created: latency-svc-rkt5p Mar 17 11:09:19.211: INFO: Got endpoints: latency-svc-rkt5p [5.932858542s] Mar 17 11:09:19.411: INFO: Created: latency-svc-ssclc Mar 17 11:09:19.414: INFO: Got endpoints: latency-svc-ssclc [5.84061103s] Mar 17 11:09:19.696: INFO: Created: latency-svc-zq5cp Mar 17 11:09:19.699: INFO: Got endpoints: latency-svc-zq5cp [5.871257066s] Mar 17 11:09:19.967: INFO: Created: latency-svc-zkrvt Mar 17 11:09:19.983: INFO: Got endpoints: latency-svc-zkrvt [3.838506026s] Mar 17 11:09:20.140: INFO: Created: latency-svc-2vd98 Mar 17 11:09:20.143: INFO: Got endpoints: latency-svc-2vd98 [3.938856553s] Mar 17 11:09:20.218: INFO: Created: latency-svc-jggzk Mar 17 11:09:20.398: INFO: Created: latency-svc-jkqwb Mar 17 11:09:20.399: INFO: Got endpoints: latency-svc-jggzk [3.685269114s] Mar 17 11:09:20.426: INFO: Got endpoints: latency-svc-jkqwb [3.667420317s] Mar 17 11:09:20.655: INFO: Created: latency-svc-fbtxb Mar 17 11:09:20.661: INFO: Got endpoints: latency-svc-fbtxb [3.619595225s] Mar 17 11:09:20.661: INFO: Latencies: [339.924792ms 489.113404ms 581.041634ms 846.974675ms 1.169250857s 1.566096612s 1.986316432s 2.280388s 2.481594005s 2.831933102s 3.103956575s 3.360843662s 3.415055581s 3.458408594s 3.567933199s 3.587118395s 3.61291387s 3.618423562s 3.619595225s 3.624739979s 3.637227859s 3.66717033s 3.667420317s 3.681572484s 3.685269114s 3.685358124s 3.706279442s 3.711587925s 3.749625421s 3.798419442s 3.801672049s 3.802263984s 3.802306247s 3.806836218s 3.824768703s 3.826070768s 3.838506026s 3.840063263s 3.861822231s 3.876446973s 3.876838475s 3.886185495s 3.894463744s 3.914427212s 3.92171414s 3.938856553s 3.947987526s 3.956848815s 3.963184734s 3.968646145s 3.990907839s 3.993833475s 4.066840382s 4.074832456s 4.145193686s 4.158724023s 4.184524813s 4.192990419s 4.196396176s 4.197081642s 4.199186902s 4.20251035s 4.209345517s 4.21873219s 4.22616522s 4.227546048s 4.239297776s 4.296738856s 4.310957583s 4.337127959s 4.341612577s 4.345107914s 4.372576638s 4.393504361s 4.412054961s 4.415389845s 4.445688672s 4.515772209s 4.531008916s 4.53485404s 4.617787633s 4.682439514s 4.690352037s 4.878274913s 4.945194279s 4.961348251s 4.972193018s 4.984917754s 5.024901842s 5.048117298s 5.136616352s 5.160240114s 5.172086333s 5.181459288s 5.251590817s 5.252426986s 5.359819494s 5.380882532s 5.384728573s 5.402441187s 5.421774755s 5.456800736s 5.494981016s 5.654286038s 5.670847639s 5.711005077s 5.764119084s 5.812704075s 5.831852439s 5.833794635s 5.84061103s 5.866137916s 5.871257066s 5.888630292s 5.913434792s 5.930650337s 5.93123493s 5.932858542s 6.015429174s 6.032953101s 6.066644924s 6.091322382s 6.094716743s 6.115029564s 6.152056106s 6.164727243s 6.171907821s 6.213946738s 6.27834453s 6.360112371s 6.379394486s 6.480992054s 6.584738226s 6.617021252s 6.651925607s 6.765439602s 6.803317744s 6.853426903s 6.863176745s 6.982077302s 7.038664963s 7.04384789s 7.066662432s 7.08776316s 7.128472127s 7.203326778s 7.229184697s 7.277623859s 7.298631323s 7.319388435s 7.349779375s 7.392567904s 7.457174925s 7.498016793s 7.5135925s 7.514539132s 7.565591405s 7.636866504s 7.669382941s 7.705290494s 7.710348283s 7.732345568s 7.893829643s 8.00621768s 8.142180754s 8.165881371s 8.807092078s 8.990028384s 
9.056739858s 9.213145778s 9.356152913s 9.905127488s 10.103886127s 10.180713968s 10.466564764s 10.861982264s 11.170866293s 11.59392043s 11.829089228s 11.892614494s 11.965963488s 12.307532992s 12.3476904s 12.369421372s 12.526433043s 12.558156027s 12.738406376s 12.788116497s 12.914846763s 12.936712369s 12.94552685s 12.994350523s 13.041901302s 13.049434957s 13.051698s 13.176494499s 13.246660779s 13.25879752s 13.302881333s 13.489714636s] Mar 17 11:09:20.661: INFO: 50 %ile: 5.421774755s Mar 17 11:09:20.661: INFO: 90 %ile: 11.965963488s Mar 17 11:09:20.661: INFO: 99 %ile: 13.302881333s Mar 17 11:09:20.661: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:09:20.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-4jqdr" for this suite. Mar 17 11:10:44.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:10:44.790: INFO: namespace: e2e-tests-svc-latency-4jqdr, resource: bindings, ignored listing per whitelist Mar 17 11:10:44.800: INFO: namespace e2e-tests-svc-latency-4jqdr deletion completed in 1m24.125077764s • [SLOW TEST:177.744 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:10:44.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-cjvb STEP: Creating a pod to test atomic-volume-subpath Mar 17 11:10:44.988: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-cjvb" in namespace "e2e-tests-subpath-x742l" to be "success or failure" Mar 17 11:10:45.037: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Pending", Reason="", readiness=false. Elapsed: 48.688042ms Mar 17 11:10:47.221: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232772047s Mar 17 11:10:49.225: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237260818s Mar 17 11:10:51.230: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.242090604s Mar 17 11:10:53.233: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.245058091s Mar 17 11:10:55.246: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.258425079s Mar 17 11:10:57.260: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Running", Reason="", readiness=false. Elapsed: 12.271987278s Mar 17 11:10:59.270: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Running", Reason="", readiness=false. Elapsed: 14.28223611s Mar 17 11:11:01.277: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Running", Reason="", readiness=false. Elapsed: 16.288962788s Mar 17 11:11:03.280: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Running", Reason="", readiness=false. Elapsed: 18.292457946s Mar 17 11:11:05.362: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Running", Reason="", readiness=false. Elapsed: 20.374415529s Mar 17 11:11:07.366: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Running", Reason="", readiness=false. Elapsed: 22.378316146s Mar 17 11:11:09.371: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Running", Reason="", readiness=false. Elapsed: 24.38339655s Mar 17 11:11:11.374: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Running", Reason="", readiness=false. Elapsed: 26.386205598s Mar 17 11:11:13.378: INFO: Pod "pod-subpath-test-secret-cjvb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.389594063s STEP: Saw pod success Mar 17 11:11:13.378: INFO: Pod "pod-subpath-test-secret-cjvb" satisfied condition "success or failure" Mar 17 11:11:13.380: INFO: Trying to get logs from node kube pod pod-subpath-test-secret-cjvb container test-container-subpath-secret-cjvb: STEP: delete the pod Mar 17 11:11:13.864: INFO: Waiting for pod pod-subpath-test-secret-cjvb to disappear Mar 17 11:11:13.878: INFO: Pod pod-subpath-test-secret-cjvb no longer exists STEP: Deleting pod pod-subpath-test-secret-cjvb Mar 17 11:11:13.878: INFO: Deleting pod "pod-subpath-test-secret-cjvb" in namespace "e2e-tests-subpath-x742l" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:11:13.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-x742l" for this suite. 
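[editor's aside] The Subpath case above ("Atomic writer volumes should support subpaths with secret pod") drives a pod that mounts a single key of a secret through volumeMounts[].subPath. For readers reproducing the shape of that pod outside the suite, here is a minimal sketch using the kubernetes Python client; the secret name, key, image and namespace below are illustrative, not taken from this log.

```python
from kubernetes import client, config

config.load_kube_config()                      # assumes a reachable kubeconfig
v1 = client.CoreV1Api()

# Hypothetical secret providing the key we mount via subPath.
v1.create_namespaced_secret(namespace="default", body={
    "apiVersion": "v1", "kind": "Secret",
    "metadata": {"name": "subpath-demo-secret"},
    "stringData": {"username": "demo-user"},
})

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "pod-subpath-secret-demo"},
    "spec": {
        "restartPolicy": "Never",
        "volumes": [{"name": "secret-vol",
                     "secret": {"secretName": "subpath-demo-secret"}}],
        "containers": [{
            "name": "reader",
            "image": "busybox",
            "command": ["sh", "-c", "cat /etc/creds/username"],
            # subPath mounts only the file projected for the 'username' key.
            "volumeMounts": [{"name": "secret-vol",
                              "mountPath": "/etc/creds/username",
                              "subPath": "username"}],
        }],
    },
}
v1.create_namespaced_pod(namespace="default", body=pod)
```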
Mar 17 11:11:22.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:11:22.123: INFO: namespace: e2e-tests-subpath-x742l, resource: bindings, ignored listing per whitelist Mar 17 11:11:22.132: INFO: namespace e2e-tests-subpath-x742l deletion completed in 8.130708458s • [SLOW TEST:37.333 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:11:22.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 17 11:11:28.927: INFO: Successfully updated pod "pod-update-activedeadlineseconds-65a6045d-48a5-11e9-bf64-0242ac110009" Mar 17 11:11:28.927: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-65a6045d-48a5-11e9-bf64-0242ac110009" in namespace "e2e-tests-pods-pdghx" to be "terminated due to deadline exceeded" Mar 17 11:11:29.148: INFO: Pod "pod-update-activedeadlineseconds-65a6045d-48a5-11e9-bf64-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 220.490295ms Mar 17 11:11:31.150: INFO: Pod "pod-update-activedeadlineseconds-65a6045d-48a5-11e9-bf64-0242ac110009": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.223025727s Mar 17 11:11:31.151: INFO: Pod "pod-update-activedeadlineseconds-65a6045d-48a5-11e9-bf64-0242ac110009" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:11:31.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-pdghx" for this suite. 
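[editor's aside] The Pods case above updates spec.activeDeadlineSeconds on a running pod and then waits for the pod to fail with DeadlineExceeded. Outside the suite the same update can be issued as a patch; a minimal sketch with the kubernetes Python client, where the pod name and namespace are hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()                      # assumes a reachable kubeconfig
v1 = client.CoreV1Api()

# Tighten the deadline on an already-running pod (name is hypothetical).
v1.patch_namespaced_pod(
    name="pod-update-deadline-demo",
    namespace="default",
    body={"spec": {"activeDeadlineSeconds": 5}},
)

# Once the deadline has passed, the pod status is reported as
# phase=Failed, reason=DeadlineExceeded (re-run the read to observe it).
status = v1.read_namespaced_pod("pod-update-deadline-demo", "default").status
print(status.phase, status.reason)
```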
Mar 17 11:11:37.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:11:37.200: INFO: namespace: e2e-tests-pods-pdghx, resource: bindings, ignored listing per whitelist Mar 17 11:11:37.252: INFO: namespace e2e-tests-pods-pdghx deletion completed in 6.099330681s • [SLOW TEST:15.119 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:11:37.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 17 11:11:48.522: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:11:49.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-mspvw" for this suite. 
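[editor's aside] The ReplicaSet case above relies on controller adoption: a bare pod whose labels match a ReplicaSet's selector gets an ownerReference to that ReplicaSet, and relabelling the pod releases it again. A sketch of that flow with the kubernetes Python client; all object names, labels and the image are hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()                      # assumes a reachable kubeconfig
v1 = client.CoreV1Api()
apps = client.AppsV1Api()

label = {"name": "pod-adoption-release"}

# 1. A bare pod carrying the label.
v1.create_namespaced_pod("default", body={
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "pod-adoption-release", "labels": label},
    "spec": {"containers": [{"name": "nginx", "image": "nginx:1.15"}]},
})

# 2. A ReplicaSet whose selector matches that label adopts the existing pod
#    instead of creating a new replica.
apps.create_namespaced_replica_set("default", body={
    "apiVersion": "apps/v1", "kind": "ReplicaSet",
    "metadata": {"name": "adopter"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": label},
        "template": {"metadata": {"labels": label},
                     "spec": {"containers": [{"name": "nginx",
                                              "image": "nginx:1.15"}]}},
    },
})

# 3. Changing the pod's label releases it; the ReplicaSet then creates a
#    replacement to get back to replicas=1.
v1.patch_namespaced_pod("pod-adoption-release", "default",
                        body={"metadata": {"labels": {"name": "released"}}})
```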
Mar 17 11:12:15.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:12:15.458: INFO: namespace: e2e-tests-replicaset-mspvw, resource: bindings, ignored listing per whitelist Mar 17 11:12:15.470: INFO: namespace e2e-tests-replicaset-mspvw deletion completed in 26.279715336s • [SLOW TEST:38.218 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:12:15.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-8589e8f1-48a5-11e9-bf64-0242ac110009 STEP: Creating secret with name s-test-opt-upd-8589e95b-48a5-11e9-bf64-0242ac110009 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-8589e8f1-48a5-11e9-bf64-0242ac110009 STEP: Updating secret s-test-opt-upd-8589e95b-48a5-11e9-bf64-0242ac110009 STEP: Creating secret with name s-test-opt-create-8589e987-48a5-11e9-bf64-0242ac110009 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:13:37.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-sk95b" for this suite. 
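[editor's aside] The Secrets case above mounts secret volumes marked optional: true, then deletes, updates and creates secrets while waiting for the kubelet to reflect each change inside the volume. A minimal sketch of such an optional secret volume with the kubernetes Python client; the secret and pod names are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()                      # assumes a reachable kubeconfig
v1 = client.CoreV1Api()

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "optional-secret-demo"},
    "spec": {
        "containers": [{
            "name": "watcher",
            "image": "busybox",
            "command": ["sh", "-c",
                        "while true; do ls /etc/opt-secret; sleep 5; done"],
            "volumeMounts": [{"name": "opt", "mountPath": "/etc/opt-secret"}],
        }],
        # optional: true lets the pod start even if the secret does not exist
        # yet; the kubelet projects it once it appears and keeps the volume
        # in sync with later updates, which is what the test observes.
        "volumes": [{"name": "opt",
                     "secret": {"secretName": "s-test-opt-demo",
                                "optional": True}}],
    },
}
v1.create_namespaced_pod(namespace="default", body=pod)
```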
Mar 17 11:14:01.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:14:01.927: INFO: namespace: e2e-tests-secrets-sk95b, resource: bindings, ignored listing per whitelist Mar 17 11:14:01.981: INFO: namespace e2e-tests-secrets-sk95b deletion completed in 24.230225767s • [SLOW TEST:106.511 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:14:01.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 17 11:14:02.185: INFO: Waiting up to 5m0s for pod "downward-api-c4de2e38-48a5-11e9-bf64-0242ac110009" in namespace "e2e-tests-downward-api-9d9jr" to be "success or failure" Mar 17 11:14:02.293: INFO: Pod "downward-api-c4de2e38-48a5-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 108.180019ms Mar 17 11:14:05.894: INFO: Pod "downward-api-c4de2e38-48a5-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.709069632s Mar 17 11:14:07.898: INFO: Pod "downward-api-c4de2e38-48a5-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 5.713087686s Mar 17 11:14:09.902: INFO: Pod "downward-api-c4de2e38-48a5-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.716419414s STEP: Saw pod success Mar 17 11:14:09.902: INFO: Pod "downward-api-c4de2e38-48a5-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:14:09.904: INFO: Trying to get logs from node kube pod downward-api-c4de2e38-48a5-11e9-bf64-0242ac110009 container dapi-container: STEP: delete the pod Mar 17 11:14:10.071: INFO: Waiting for pod downward-api-c4de2e38-48a5-11e9-bf64-0242ac110009 to disappear Mar 17 11:14:10.086: INFO: Pod downward-api-c4de2e38-48a5-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:14:10.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9d9jr" for this suite. 
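[editor's aside] The Downward API case above injects the node's IP into the container through an env var backed by fieldRef: status.hostIP. Reproduced by hand, it looks like the following sketch (kubernetes Python client; pod name, image and namespace are illustrative):

```python
from kubernetes import client, config

config.load_kube_config()                      # assumes a reachable kubeconfig
v1 = client.CoreV1Api()

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "downward-hostip-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",
            "command": ["sh", "-c", "echo HOST_IP=$HOST_IP"],
            "env": [{
                "name": "HOST_IP",
                "valueFrom": {"fieldRef": {"fieldPath": "status.hostIP"}},
            }],
        }],
    },
}
v1.create_namespaced_pod(namespace="default", body=pod)

# After the pod has run, its log shows the IP of the node it landed on:
print(v1.read_namespaced_pod_log("downward-hostip-demo", "default"))
```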
Mar 17 11:14:16.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:14:16.177: INFO: namespace: e2e-tests-downward-api-9d9jr, resource: bindings, ignored listing per whitelist Mar 17 11:14:16.213: INFO: namespace e2e-tests-downward-api-9d9jr deletion completed in 6.124487808s • [SLOW TEST:14.232 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:14:16.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:14:55.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-t6g5v" for this suite. 
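[editor's aside] The Container Runtime blackbox case above starts containers that exit and asserts on the resulting RestartCount, Phase, Ready condition and State. The same fields can be read directly from the pod status; a small sketch with the kubernetes Python client, assuming a hypothetical pod that has already terminated:

```python
from kubernetes import client, config

config.load_kube_config()                      # assumes a reachable kubeconfig
v1 = client.CoreV1Api()

# A container that runs once and is never restarted (restartPolicy: Never).
v1.create_namespaced_pod("default", body={
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "terminate-demo"},
    "spec": {"restartPolicy": "Never",
             "containers": [{"name": "terminate-cmd",
                             "image": "busybox",
                             "command": ["sh", "-c", "exit 0"]}]},
})

# Once it has finished, the status carries the fields the test checks.
status = v1.read_namespaced_pod("terminate-demo", "default").status
cs = status.container_statuses[0]
print("phase:", status.phase)                   # e.g. Succeeded
print("restartCount:", cs.restart_count)        # 0 with restartPolicy: Never
print("ready:", cs.ready)
print("exitCode:", cs.state.terminated and cs.state.terminated.exit_code)
```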
Mar 17 11:15:01.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:15:01.507: INFO: namespace: e2e-tests-container-runtime-t6g5v, resource: bindings, ignored listing per whitelist Mar 17 11:15:01.599: INFO: namespace e2e-tests-container-runtime-t6g5v deletion completed in 6.302303944s • [SLOW TEST:45.386 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:15:01.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Mar 17 11:15:01.810: INFO: Waiting up to 5m0s for pod "client-containers-e870d2d0-48a5-11e9-bf64-0242ac110009" in namespace "e2e-tests-containers-nxdq7" to be "success or failure" Mar 17 11:15:01.876: INFO: Pod "client-containers-e870d2d0-48a5-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 65.316096ms Mar 17 11:15:03.878: INFO: Pod "client-containers-e870d2d0-48a5-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068091241s Mar 17 11:15:06.222: INFO: Pod "client-containers-e870d2d0-48a5-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.411939324s Mar 17 11:15:08.225: INFO: Pod "client-containers-e870d2d0-48a5-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.414424473s STEP: Saw pod success Mar 17 11:15:08.225: INFO: Pod "client-containers-e870d2d0-48a5-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:15:08.521: INFO: Trying to get logs from node kube pod client-containers-e870d2d0-48a5-11e9-bf64-0242ac110009 container test-container: STEP: delete the pod Mar 17 11:15:09.331: INFO: Waiting for pod client-containers-e870d2d0-48a5-11e9-bf64-0242ac110009 to disappear Mar 17 11:15:09.396: INFO: Pod client-containers-e870d2d0-48a5-11e9-bf64-0242ac110009 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:15:09.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-nxdq7" for this suite. 
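[editor's aside] The Docker Containers case above verifies that a container's command field overrides the image's default ENTRYPOINT (while args would override the image CMD). A minimal sketch of such an override with the kubernetes Python client; the image and names are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()                      # assumes a reachable kubeconfig
v1 = client.CoreV1Api()

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "entrypoint-override-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            # 'command' replaces the image ENTRYPOINT entirely;
            # 'args' (not set here) would replace the image CMD.
            "command": ["/bin/echo", "entrypoint overridden"],
        }],
    },
}
v1.create_namespaced_pod(namespace="default", body=pod)
```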
Mar 17 11:15:15.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:15:15.736: INFO: namespace: e2e-tests-containers-nxdq7, resource: bindings, ignored listing per whitelist Mar 17 11:15:15.745: INFO: namespace e2e-tests-containers-nxdq7 deletion completed in 6.100357734s • [SLOW TEST:14.145 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:15:15.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 11:15:16.010: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0e5eff1-48a5-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-p7jbz" to be "success or failure" Mar 17 11:15:16.164: INFO: Pod "downwardapi-volume-f0e5eff1-48a5-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 154.15111ms Mar 17 11:15:18.168: INFO: Pod "downwardapi-volume-f0e5eff1-48a5-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157561707s Mar 17 11:15:20.293: INFO: Pod "downwardapi-volume-f0e5eff1-48a5-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.283279211s Mar 17 11:15:22.297: INFO: Pod "downwardapi-volume-f0e5eff1-48a5-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.287208526s STEP: Saw pod success Mar 17 11:15:22.297: INFO: Pod "downwardapi-volume-f0e5eff1-48a5-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:15:22.301: INFO: Trying to get logs from node kube pod downwardapi-volume-f0e5eff1-48a5-11e9-bf64-0242ac110009 container client-container: STEP: delete the pod Mar 17 11:15:22.382: INFO: Waiting for pod downwardapi-volume-f0e5eff1-48a5-11e9-bf64-0242ac110009 to disappear Mar 17 11:15:23.155: INFO: Pod downwardapi-volume-f0e5eff1-48a5-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:15:23.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p7jbz" for this suite. 
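[editor's aside] The Projected downwardAPI case above exposes the container's CPU limit as a file in a projected volume via resourceFieldRef. A sketch of an equivalent pod with the kubernetes Python client; names, the namespace and the 1-CPU limit are invented for illustration:

```python
from kubernetes import client, config

config.load_kube_config()                      # assumes a reachable kubeconfig
v1 = client.CoreV1Api()

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "projected-cpu-limit-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "command": ["sh", "-c", "cat /etc/podinfo/cpu_limit"],
            "resources": {"limits": {"cpu": "1"}},
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {"sources": [{"downwardAPI": {"items": [{
                "path": "cpu_limit",
                "resourceFieldRef": {"containerName": "client-container",
                                     "resource": "limits.cpu",
                                     # report the limit in millicores
                                     "divisor": "1m"},
            }]}}]},
        }],
    },
}
v1.create_namespaced_pod(namespace="default", body=pod)
```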
Mar 17 11:15:31.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:15:31.743: INFO: namespace: e2e-tests-projected-p7jbz, resource: bindings, ignored listing per whitelist Mar 17 11:15:31.781: INFO: namespace e2e-tests-projected-p7jbz deletion completed in 8.607389637s • [SLOW TEST:16.036 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:15:31.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Mar 17 11:15:31.856: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-cc52h" to be "success or failure" Mar 17 11:15:31.881: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 24.382569ms Mar 17 11:15:33.958: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101418013s Mar 17 11:15:35.961: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104086898s Mar 17 11:15:37.965: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108176332s STEP: Saw pod success Mar 17 11:15:37.965: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 17 11:15:37.967: INFO: Trying to get logs from node kube pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 17 11:15:38.254: INFO: Waiting for pod pod-host-path-test to disappear Mar 17 11:15:38.268: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:15:38.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-cc52h" for this suite. 
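[editor's aside] The HostPath case above mounts a hostPath volume and checks the mode reported inside the container. A minimal hostPath pod looks like the sketch below (kubernetes Python client; the host directory and names are hypothetical, and hostPath naturally depends on what exists on the node):

```python
from kubernetes import client, config

config.load_kube_config()                      # assumes a reachable kubeconfig
v1 = client.CoreV1Api()

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "pod-host-path-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container-1",
            "image": "busybox",
            # Print the mode of the mounted directory, in the spirit of the
            # e2e check.
            "command": ["sh", "-c", "stat -c %a /test-volume"],
            "volumeMounts": [{"name": "host-vol", "mountPath": "/test-volume"}],
        }],
        "volumes": [{"name": "host-vol",
                     "hostPath": {"path": "/tmp/hostpath-demo",
                                  "type": "DirectoryOrCreate"}}],
    },
}
v1.create_namespaced_pod(namespace="default", body=pod)
```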
Mar 17 11:15:46.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:15:46.368: INFO: namespace: e2e-tests-hostpath-cc52h, resource: bindings, ignored listing per whitelist Mar 17 11:15:46.382: INFO: namespace e2e-tests-hostpath-cc52h deletion completed in 8.110396308s • [SLOW TEST:14.601 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:15:46.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Mar 17 11:15:47.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gfr5h' Mar 17 11:15:50.730: INFO: stderr: "" Mar 17 11:15:50.730: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 17 11:15:51.733: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:15:51.733: INFO: Found 0 / 1 Mar 17 11:15:52.733: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:15:52.733: INFO: Found 0 / 1 Mar 17 11:15:53.821: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:15:53.821: INFO: Found 0 / 1 Mar 17 11:15:54.734: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:15:54.734: INFO: Found 0 / 1 Mar 17 11:15:55.735: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:15:55.735: INFO: Found 1 / 1 Mar 17 11:15:55.735: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 17 11:15:55.739: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:15:55.739: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 17 11:15:55.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-254c5 --namespace=e2e-tests-kubectl-gfr5h -p {"metadata":{"annotations":{"x":"y"}}}' Mar 17 11:15:55.880: INFO: stderr: "" Mar 17 11:15:55.880: INFO: stdout: "pod/redis-master-254c5 patched\n" STEP: checking annotations Mar 17 11:15:55.898: INFO: Selector matched 1 pods for map[app:redis] Mar 17 11:15:55.898: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:15:55.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gfr5h" for this suite. 
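[editor's aside] The Kubectl patch case above lists the RC's pods by label and patches an x: y annotation onto each one via kubectl. The API equivalent of that loop, sketched with the kubernetes Python client (the namespace and label selector below are placeholders, not the ones from this run):

```python
from kubernetes import client, config

config.load_kube_config()                      # assumes a reachable kubeconfig
v1 = client.CoreV1Api()

# Same shape as the test: select the controller's pods by label, patch each.
for pod in v1.list_namespaced_pod("default", label_selector="app=redis").items:
    v1.patch_namespaced_pod(
        name=pod.metadata.name,
        namespace="default",
        body={"metadata": {"annotations": {"x": "y"}}},
    )
    print("patched", pod.metadata.name)
```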
Mar 17 11:16:18.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:16:18.132: INFO: namespace: e2e-tests-kubectl-gfr5h, resource: bindings, ignored listing per whitelist Mar 17 11:16:18.143: INFO: namespace e2e-tests-kubectl-gfr5h deletion completed in 22.241062881s • [SLOW TEST:31.760 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:16:18.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 11:16:46.355: INFO: Container started at 2019-03-17 11:16:23 +0000 UTC, pod became ready at 2019-03-17 11:16:46 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:16:46.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-v64vb" for this suite. 
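[editor's aside] The probing case above checks that a container with a readiness probe is not marked Ready before its initial delay and is never restarted. A sketch of such a probe with the kubernetes Python client; the 30-second delay, image and names are illustrative only:

```python
from kubernetes import client, config

config.load_kube_config()                      # assumes a reachable kubeconfig
v1 = client.CoreV1Api()

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "readiness-delay-demo"},
    "spec": {
        "containers": [{
            "name": "probe-target",
            "image": "busybox",
            "command": ["sh", "-c", "touch /tmp/healthy && sleep 3600"],
            "readinessProbe": {
                "exec": {"command": ["cat", "/tmp/healthy"]},
                "initialDelaySeconds": 30,  # pod must not be Ready before this
                "periodSeconds": 5,
            },
        }],
    },
}
v1.create_namespaced_pod(namespace="default", body=pod)

# The Ready condition should flip to True only after the initial delay, and
# containerStatuses[0].restartCount should stay at 0, mirroring the check
# in the log above.
```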
Mar 17 11:17:08.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:17:08.408: INFO: namespace: e2e-tests-container-probe-v64vb, resource: bindings, ignored listing per whitelist Mar 17 11:17:08.487: INFO: namespace e2e-tests-container-probe-v64vb deletion completed in 22.12657342s • [SLOW TEST:50.344 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:17:08.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-3407b1eb-48a6-11e9-bf64-0242ac110009 STEP: Creating a pod to test consume configMaps Mar 17 11:17:08.645: INFO: Waiting up to 5m0s for pod "pod-configmaps-34081816-48a6-11e9-bf64-0242ac110009" in namespace "e2e-tests-configmap-9ld59" to be "success or failure" Mar 17 11:17:08.654: INFO: Pod "pod-configmaps-34081816-48a6-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 8.67048ms Mar 17 11:17:10.657: INFO: Pod "pod-configmaps-34081816-48a6-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011459145s Mar 17 11:17:12.660: INFO: Pod "pod-configmaps-34081816-48a6-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014930903s Mar 17 11:17:14.664: INFO: Pod "pod-configmaps-34081816-48a6-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018453538s STEP: Saw pod success Mar 17 11:17:14.664: INFO: Pod "pod-configmaps-34081816-48a6-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:17:14.666: INFO: Trying to get logs from node kube pod pod-configmaps-34081816-48a6-11e9-bf64-0242ac110009 container configmap-volume-test: STEP: delete the pod Mar 17 11:17:14.725: INFO: Waiting for pod pod-configmaps-34081816-48a6-11e9-bf64-0242ac110009 to disappear Mar 17 11:17:14.750: INFO: Pod pod-configmaps-34081816-48a6-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:17:14.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-9ld59" for this suite. 
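[editor's aside] The ConfigMap case above consumes a ConfigMap through a volume with items mappings, i.e. a selected key is projected to a chosen relative path instead of its key name. Sketch with the kubernetes Python client; the ConfigMap name, key and paths are hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()                      # assumes a reachable kubeconfig
v1 = client.CoreV1Api()

v1.create_namespaced_config_map("default", body={
    "apiVersion": "v1", "kind": "ConfigMap",
    "metadata": {"name": "configmap-volume-map-demo"},
    "data": {"data-1": "value-1"},
})

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "pod-configmaps-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "configmap-volume-test",
            "image": "busybox",
            "command": ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"],
            "volumeMounts": [{"name": "cm",
                              "mountPath": "/etc/configmap-volume"}],
        }],
        "volumes": [{"name": "cm", "configMap": {
            "name": "configmap-volume-map-demo",
            # The mapping: key 'data-1' appears as file 'path/to/data-2'.
            "items": [{"key": "data-1", "path": "path/to/data-2"}],
        }}],
    },
}
v1.create_namespaced_pod(namespace="default", body=pod)
```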
Mar 17 11:17:20.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:17:20.961: INFO: namespace: e2e-tests-configmap-9ld59, resource: bindings, ignored listing per whitelist Mar 17 11:17:21.007: INFO: namespace e2e-tests-configmap-9ld59 deletion completed in 6.253709464s • [SLOW TEST:12.520 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:17:21.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Mar 17 11:17:21.383: INFO: Waiting up to 5m0s for pod "var-expansion-3b8929ed-48a6-11e9-bf64-0242ac110009" in namespace "e2e-tests-var-expansion-pwnr5" to be "success or failure" Mar 17 11:17:21.398: INFO: Pod "var-expansion-3b8929ed-48a6-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.52305ms Mar 17 11:17:23.402: INFO: Pod "var-expansion-3b8929ed-48a6-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018932359s Mar 17 11:17:25.404: INFO: Pod "var-expansion-3b8929ed-48a6-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021780428s Mar 17 11:17:29.483: INFO: Pod "var-expansion-3b8929ed-48a6-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100769063s STEP: Saw pod success Mar 17 11:17:29.483: INFO: Pod "var-expansion-3b8929ed-48a6-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:17:29.709: INFO: Trying to get logs from node kube pod var-expansion-3b8929ed-48a6-11e9-bf64-0242ac110009 container dapi-container: STEP: delete the pod Mar 17 11:17:29.742: INFO: Waiting for pod var-expansion-3b8929ed-48a6-11e9-bf64-0242ac110009 to disappear Mar 17 11:17:29.743: INFO: Pod var-expansion-3b8929ed-48a6-11e9-bf64-0242ac110009 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:17:29.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-pwnr5" for this suite. 
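[editor's aside] The Variable Expansion case above substitutes an environment variable into a container's args using the $(VAR) syntax, which Kubernetes expands before the container starts. Sketch with the kubernetes Python client; the variable name, value and object names are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()                      # assumes a reachable kubeconfig
v1 = client.CoreV1Api()

pod = {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "var-expansion-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",
            "env": [{"name": "TEST_VAR", "value": "test-value"}],
            "command": ["sh", "-c"],
            # $(TEST_VAR) is expanded by Kubernetes itself, not by the shell.
            "args": ["echo substituted value: $(TEST_VAR)"],
        }],
    },
}
v1.create_namespaced_pod(namespace="default", body=pod)
```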
Mar 17 11:17:35.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:17:35.838: INFO: namespace: e2e-tests-var-expansion-pwnr5, resource: bindings, ignored listing per whitelist Mar 17 11:17:35.875: INFO: namespace e2e-tests-var-expansion-pwnr5 deletion completed in 6.129475749s • [SLOW TEST:14.868 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:17:35.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 17 11:17:36.056: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 17 11:17:36.082: INFO: Waiting for terminating namespaces to be deleted... Mar 17 11:17:36.083: INFO: Logging pods the kubelet thinks is on node kube before test Mar 17 11:17:36.088: INFO: kube-controller-manager-kube from kube-system started at (0 container statuses recorded) Mar 17 11:17:36.088: INFO: etcd-kube from kube-system started at (0 container statuses recorded) Mar 17 11:17:36.088: INFO: kube-apiserver-kube from kube-system started at (0 container statuses recorded) Mar 17 11:17:36.088: INFO: kube-proxy-6jlw8 from kube-system started at 2019-03-09 11:38:22 +0000 UTC (1 container statuses recorded) Mar 17 11:17:36.088: INFO: Container kube-proxy ready: true, restart count 0 Mar 17 11:17:36.088: INFO: kube-scheduler-kube from kube-system started at (0 container statuses recorded) Mar 17 11:17:36.088: INFO: weave-net-47d2b from kube-system started at 2019-03-09 11:38:24 +0000 UTC (2 container statuses recorded) Mar 17 11:17:36.088: INFO: Container weave ready: true, restart count 0 Mar 17 11:17:36.088: INFO: Container weave-npc ready: true, restart count 0 Mar 17 11:17:36.088: INFO: coredns-86c58d9df4-lrf5x from kube-system started at 2019-03-09 11:38:41 +0000 UTC (1 container statuses recorded) Mar 17 11:17:36.088: INFO: Container coredns ready: true, restart count 0 Mar 17 11:17:36.088: INFO: coredns-86c58d9df4-xv8sl from kube-system started at 2019-03-09 11:38:41 +0000 UTC (1 container statuses recorded) Mar 17 11:17:36.088: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.158cbae31d739230], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.] 
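[editor's aside] The scheduling predicate above is exercised by creating a pod whose nodeSelector matches no node and then watching for the FailedScheduling event quoted in the log. Sketch with the kubernetes Python client; the label key/value, names and namespace are invented for illustration:

```python
from kubernetes import client, config

config.load_kube_config()                      # assumes a reachable kubeconfig
v1 = client.CoreV1Api()

v1.create_namespaced_pod("default", body={
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "restricted-pod-demo"},
    "spec": {
        # No node carries this label, so the pod stays Pending.
        "nodeSelector": {"e2e.example/nonexistent": "true"},
        "containers": [{"name": "pause", "image": "k8s.gcr.io/pause:3.1"}],
    },
})

# The scheduler should then emit a FailedScheduling event similar to the one
# in the log above ("0/1 nodes are available: ... didn't match node selector").
for ev in v1.list_namespaced_event("default").items:
    if ev.reason == "FailedScheduling":
        print(ev.message)
```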
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:17:37.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-2tvkb" for this suite. Mar 17 11:17:43.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:17:43.180: INFO: namespace: e2e-tests-sched-pred-2tvkb, resource: bindings, ignored listing per whitelist Mar 17 11:17:43.235: INFO: namespace e2e-tests-sched-pred-2tvkb deletion completed in 6.086910415s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.360 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:17:43.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-7ctgp [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-7ctgp STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-7ctgp Mar 17 11:17:43.408: INFO: Found 0 stateful pods, waiting for 1 Mar 17 11:17:53.415: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 17 11:17:53.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7ctgp ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 17 11:17:53.750: INFO: stderr: "" Mar 17 11:17:53.750: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 17 11:17:53.750: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 17 11:17:53.753: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 17 11:18:03.756: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 17 11:18:03.756: 
INFO: Waiting for statefulset status.replicas updated to 0 Mar 17 11:18:03.901: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:18:03.901: INFO: ss-0 kube Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:17:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:17:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:17:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:17:43 +0000 UTC }] Mar 17 11:18:03.901: INFO: Mar 17 11:18:03.901: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 17 11:18:05.975: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.865050387s Mar 17 11:18:10.498: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.790877647s Mar 17 11:18:11.506: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.268363709s Mar 17 11:18:12.529: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.2600996s Mar 17 11:18:14.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 236.826929ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-7ctgp Mar 17 11:18:15.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7ctgp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:18:16.079: INFO: stderr: "" Mar 17 11:18:16.079: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 17 11:18:16.079: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 17 11:18:16.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7ctgp ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:18:16.265: INFO: rc: 1 Mar 17 11:18:16.265: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7ctgp ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000bc7740 exit status 1 true [0xc00160e608 0xc00160e620 0xc00160e638] [0xc00160e608 0xc00160e620 0xc00160e638] [0xc00160e618 0xc00160e630] [0x92f7b0 0x92f7b0] 0xc001cf5980 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Mar 17 11:18:26.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7ctgp ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:18:26.647: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n" Mar 17 11:18:26.647: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 17 11:18:26.647: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 17 11:18:26.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7ctgp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 17 11:18:26.906: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or 
directory\n" Mar 17 11:18:26.906: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 17 11:18:26.906: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 17 11:18:26.910: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 17 11:18:26.910: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 17 11:18:26.910: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 17 11:18:26.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7ctgp ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 17 11:18:27.143: INFO: stderr: "" Mar 17 11:18:27.143: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 17 11:18:27.143: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 17 11:18:27.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7ctgp ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 17 11:18:27.605: INFO: stderr: "" Mar 17 11:18:27.605: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 17 11:18:27.606: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 17 11:18:27.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7ctgp ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 17 11:18:27.929: INFO: stderr: "" Mar 17 11:18:27.929: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 17 11:18:27.929: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 17 11:18:27.929: INFO: Waiting for statefulset status.replicas updated to 0 Mar 17 11:18:27.932: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 17 11:18:37.937: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 17 11:18:37.937: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 17 11:18:37.937: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 17 11:18:38.043: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:18:38.043: INFO: ss-0 kube Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:17:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:17:43 +0000 UTC }] Mar 17 11:18:38.043: INFO: ss-1 kube Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:27 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 UTC }] Mar 17 11:18:38.043: INFO: ss-2 kube Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 UTC }] Mar 17 11:18:38.043: INFO: Mar 17 11:18:38.043: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 17 11:18:39.094: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:18:39.094: INFO: ss-0 kube Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:17:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:17:43 +0000 UTC }] Mar 17 11:18:39.094: INFO: ss-1 kube Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 UTC }] Mar 17 11:18:39.094: INFO: ss-2 kube Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 UTC }] Mar 17 11:18:39.094: INFO: Mar 17 11:18:39.094: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 17 11:18:40.098: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:18:40.098: INFO: ss-0 kube Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:17:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:17:43 +0000 UTC }] Mar 17 11:18:40.098: INFO: ss-1 kube Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 UTC }] Mar 17 11:18:40.098: INFO: ss-2 kube Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 UTC }] Mar 17 11:18:40.098: INFO: Mar 17 11:18:40.098: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 17 11:18:41.105: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:18:41.105: INFO: ss-0 kube Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:17:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:17:43 +0000 UTC }] Mar 17 11:18:41.105: INFO: ss-1 kube Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 UTC }] Mar 17 11:18:41.105: INFO: ss-2 kube Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 UTC }] Mar 17 11:18:41.105: INFO: Mar 17 11:18:41.105: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 17 11:18:45.192: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:18:45.192: INFO: ss-0 kube Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:17:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:17:43 +0000 UTC }] Mar 17 11:18:45.192: INFO: ss-1 kube Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 UTC }] Mar 17 11:18:45.192: INFO: ss-2 kube Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:18:03 +0000 
UTC }]
Mar 17 11:18:45.192: INFO:
Mar 17 11:18:45.192: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 17 11:18:46.196: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.75601641s
Mar 17 11:18:47.202: INFO: Verifying statefulset ss doesn't scale past 0 for another 751.772963ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-7ctgp
Mar 17 11:18:48.205: INFO: Scaling statefulset ss to 0
Mar 17 11:18:48.212: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Mar 17 11:18:48.214: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7ctgp
Mar 17 11:18:48.216: INFO: Scaling statefulset ss to 0
Mar 17 11:18:48.222: INFO: Waiting for statefulset status.replicas updated to 0
Mar 17 11:18:48.224: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:18:48.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-7ctgp" for this suite.
Mar 17 11:18:54.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:18:54.354: INFO: namespace: e2e-tests-statefulset-7ctgp, resource: bindings, ignored listing per whitelist
Mar 17 11:18:54.375: INFO: namespace e2e-tests-statefulset-7ctgp deletion completed in 6.125666225s
• [SLOW TEST:71.140 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:18:54.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Mar 17 11:18:54.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Mar 17 11:18:54.780: INFO: stderr: ""
Mar 17 11:18:54.780: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.15:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.15:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:18:54.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rdwzm" for this suite.
Mar 17 11:19:00.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:19:00.842: INFO: namespace: e2e-tests-kubectl-rdwzm, resource: bindings, ignored listing per whitelist
Mar 17 11:19:00.871: INFO: namespace e2e-tests-kubectl-rdwzm deletion completed in 6.088450282s
• [SLOW TEST:6.496 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl cluster-info
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:19:00.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-7707b681-48a6-11e9-bf64-0242ac110009
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-7707b681-48a6-11e9-bf64-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:20:24.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w4j7f" for this suite.
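The projected configMap steps just logged amount to mounting a ConfigMap through a projected volume, editing the ConfigMap, and waiting for the kubelet to refresh the mounted file. A minimal by-hand version of the same flow might look like the sketch below; the names (demo-config, projected-cm-demo), the busybox image and the mount path are illustrative assumptions, not values taken from this run.

# Sketch only: create a ConfigMap and a pod that consumes it via a projected volume.
kubectl create configmap demo-config --from-literal=key=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
# Change the ConfigMap, then re-read the mounted file; the new value should appear
# once the kubelet resyncs the volume (typically within a minute or so).
kubectl patch configmap demo-config -p '{"data":{"key":"value-2"}}'
kubectl exec projected-cm-demo -- cat /etc/cfg/key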
Mar 17 11:20:56.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:20:56.664: INFO: namespace: e2e-tests-projected-w4j7f, resource: bindings, ignored listing per whitelist Mar 17 11:20:56.722: INFO: namespace e2e-tests-projected-w4j7f deletion completed in 32.125367242s • [SLOW TEST:115.851 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:20:56.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-bc4eaac1-48a6-11e9-bf64-0242ac110009 STEP: Creating a pod to test consume configMaps Mar 17 11:20:57.277: INFO: Waiting up to 5m0s for pod "pod-configmaps-bc5046dd-48a6-11e9-bf64-0242ac110009" in namespace "e2e-tests-configmap-rdk6h" to be "success or failure" Mar 17 11:20:57.578: INFO: Pod "pod-configmaps-bc5046dd-48a6-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 300.997551ms Mar 17 11:20:59.581: INFO: Pod "pod-configmaps-bc5046dd-48a6-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30434478s Mar 17 11:21:01.585: INFO: Pod "pod-configmaps-bc5046dd-48a6-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30810298s Mar 17 11:21:03.588: INFO: Pod "pod-configmaps-bc5046dd-48a6-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.311311979s STEP: Saw pod success Mar 17 11:21:03.588: INFO: Pod "pod-configmaps-bc5046dd-48a6-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:21:03.591: INFO: Trying to get logs from node kube pod pod-configmaps-bc5046dd-48a6-11e9-bf64-0242ac110009 container configmap-volume-test: STEP: delete the pod Mar 17 11:21:05.690: INFO: Waiting for pod pod-configmaps-bc5046dd-48a6-11e9-bf64-0242ac110009 to disappear Mar 17 11:21:05.706: INFO: Pod pod-configmaps-bc5046dd-48a6-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:21:05.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rdk6h" for this suite. 
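The ConfigMap case above checks that a single ConfigMap can back two volumes in the same pod, which boils down to a manifest with two volume entries pointing at the same ConfigMap. The names below (multi-vol-config, cm-multi-volume-demo) and the busybox image are made up for illustration, not taken from the run.

# Sketch only: one ConfigMap mounted twice in the same pod.
kubectl create configmap multi-vol-config --from-literal=data-1=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-multi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "cat /etc/cfg-a/data-1 /etc/cfg-b/data-1"]
    volumeMounts:
    - name: cfg-a
      mountPath: /etc/cfg-a
    - name: cfg-b
      mountPath: /etc/cfg-b
  volumes:
  - name: cfg-a
    configMap:
      name: multi-vol-config
  - name: cfg-b
    configMap:
      name: multi-vol-config
EOF
# Once the pod has run to completion, both mounts should expose the same key.
kubectl logs cm-multi-volume-demo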
Mar 17 11:21:11.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:21:11.796: INFO: namespace: e2e-tests-configmap-rdk6h, resource: bindings, ignored listing per whitelist Mar 17 11:21:11.881: INFO: namespace e2e-tests-configmap-rdk6h deletion completed in 6.172523951s • [SLOW TEST:15.159 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:21:11.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 11:21:12.446: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c525071c-48a6-11e9-bf64-0242ac110009" in namespace "e2e-tests-downward-api-8f5xf" to be "success or failure" Mar 17 11:21:12.904: INFO: Pod "downwardapi-volume-c525071c-48a6-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 458.558806ms Mar 17 11:21:14.907: INFO: Pod "downwardapi-volume-c525071c-48a6-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.461650175s Mar 17 11:21:16.910: INFO: Pod "downwardapi-volume-c525071c-48a6-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.464601488s Mar 17 11:21:18.964: INFO: Pod "downwardapi-volume-c525071c-48a6-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.518227624s Mar 17 11:21:21.316: INFO: Pod "downwardapi-volume-c525071c-48a6-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.870579466s STEP: Saw pod success Mar 17 11:21:21.316: INFO: Pod "downwardapi-volume-c525071c-48a6-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:21:21.318: INFO: Trying to get logs from node kube pod downwardapi-volume-c525071c-48a6-11e9-bf64-0242ac110009 container client-container: STEP: delete the pod Mar 17 11:21:22.008: INFO: Waiting for pod downwardapi-volume-c525071c-48a6-11e9-bf64-0242ac110009 to disappear Mar 17 11:21:22.317: INFO: Pod downwardapi-volume-c525071c-48a6-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:21:22.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8f5xf" for this suite. 
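The Downward API case above verifies that defaultMode on a downwardAPI volume is applied to the files it projects. A hand-rolled equivalent could set an explicit mode and inspect the result; the pod and volume names here are invented for the example.

# Sketch only: downwardAPI volume with an explicit defaultMode (0400).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
# The listing (with -L to follow the projected symlink) should show the file created with mode 0400.
kubectl logs downwardapi-mode-demo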
Mar 17 11:21:28.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:21:29.292: INFO: namespace: e2e-tests-downward-api-8f5xf, resource: bindings, ignored listing per whitelist Mar 17 11:21:29.320: INFO: namespace e2e-tests-downward-api-8f5xf deletion completed in 7.000496253s • [SLOW TEST:17.439 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:21:29.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 11:21:29.493: INFO: Creating deployment "nginx-deployment" Mar 17 11:21:29.895: INFO: Waiting for observed generation 1 Mar 17 11:21:32.414: INFO: Waiting for all required pods to come up Mar 17 11:21:33.938: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 17 11:22:22.294: INFO: Waiting for deployment "nginx-deployment" to complete Mar 17 11:22:22.306: INFO: Updating deployment "nginx-deployment" with a non-existent image Mar 17 11:22:22.319: INFO: Updating deployment nginx-deployment Mar 17 11:22:22.319: INFO: Waiting for observed generation 2 Mar 17 11:22:28.010: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 17 11:22:28.635: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 17 11:22:29.789: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 17 11:22:30.324: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 17 11:22:30.324: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 17 11:22:31.216: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 17 11:22:31.579: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Mar 17 11:22:31.579: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Mar 17 11:22:31.602: INFO: Updating deployment nginx-deployment Mar 17 11:22:31.602: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Mar 17 11:22:32.703: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 17 11:22:36.097: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 17 11:22:39.119: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-knrzd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-knrzd/deployments/nginx-deployment,UID:cf872eb0-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288512,Generation:3,CreationTimestamp:2019-03-17 11:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-03-17 11:22:28 +0000 UTC 2019-03-17 11:21:29 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-65bbdb5f8" is progressing.} {Available False 2019-03-17 11:22:33 +0000 UTC 2019-03-17 11:22:33 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 17 11:22:40.390: INFO: New ReplicaSet "nginx-deployment-65bbdb5f8" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8,GenerateName:,Namespace:e2e-tests-deployment-knrzd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-knrzd/replicasets/nginx-deployment-65bbdb5f8,UID:ef0334d0-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288565,Generation:3,CreationTimestamp:2019-03-17 11:22:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 
30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment cf872eb0-48a6-11e9-a072-fa163e921bae 0xc002099cc7 0xc002099cc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 17 11:22:40.390: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 17 11:22:40.391: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965,GenerateName:,Namespace:e2e-tests-deployment-knrzd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-knrzd/replicasets/nginx-deployment-555b55d965,UID:cfc50f66-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288564,Generation:3,CreationTimestamp:2019-03-17 11:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment cf872eb0-48a6-11e9-a072-fa163e921bae 0xc002099c07 0xc002099c08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] 
map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 17 11:22:41.093: INFO: Pod "nginx-deployment-555b55d965-2bh66" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-2bh66,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-2bh66,UID:cfd14250-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288379,Generation:0,CreationTimestamp:2019-03-17 11:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc002452587 0xc002452588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002452600} {node.kubernetes.io/unreachable Exists NoExecute 0xc002452620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:30 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:10.32.0.6,StartTime:2019-03-17 11:21:30 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2019-03-17 11:21:57 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:b67e90a1d8088f0e205c77c793c271524773a6de163fb3855b1c1bedf979da7d docker://3ce1f9089808940f3ea98e91b67c63e31450a8309716f1bd54e3929121b59ca9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.093: INFO: Pod "nginx-deployment-555b55d965-45xb4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-45xb4,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-45xb4,UID:f5e9bd68-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288569,Generation:0,CreationTimestamp:2019-03-17 11:22:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc0024526e7 0xc0024526e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002452760} {node.kubernetes.io/unreachable Exists NoExecute 0xc002452780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:33 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:,StartTime:2019-03-17 11:22:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.094: INFO: Pod "nginx-deployment-555b55d965-5wt5f" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-5wt5f,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-5wt5f,UID:f6a0540e-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288553,Generation:0,CreationTimestamp:2019-03-17 11:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc002452837 0xc002452838}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024528b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024528d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.094: INFO: Pod "nginx-deployment-555b55d965-6lvpw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-6lvpw,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-6lvpw,UID:f64da9d3-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288538,Generation:0,CreationTimestamp:2019-03-17 11:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc002452947 0xc002452948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File 
IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024529c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024529e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.094: INFO: Pod "nginx-deployment-555b55d965-786tl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-786tl,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-786tl,UID:d0033812-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288394,Generation:0,CreationTimestamp:2019-03-17 11:21:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc002452a57 0xc002452a58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002452ad0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002452af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:09 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:30 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:10.32.0.5,StartTime:2019-03-17 11:21:31 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-03-17 11:21:52 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:b67e90a1d8088f0e205c77c793c271524773a6de163fb3855b1c1bedf979da7d docker://ce41e9f7fb1e0b9ff536f08d7a129bcef7b734648dc6dbfd9c133ab1b5922ac4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.094: INFO: Pod "nginx-deployment-555b55d965-8r5hp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-8r5hp,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-8r5hp,UID:cfd14c05-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288422,Generation:0,CreationTimestamp:2019-03-17 11:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc002452bb7 0xc002452bb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002452c30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002452c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:30 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:10.32.0.9,StartTime:2019-03-17 11:21:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-03-17 11:22:10 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:b67e90a1d8088f0e205c77c793c271524773a6de163fb3855b1c1bedf979da7d docker://f33e6b96d2a7469e298c096c73da6cfcfb2b23081654983f324b28dfbc5b226a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.094: INFO: Pod 
"nginx-deployment-555b55d965-8xnd2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-8xnd2,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-8xnd2,UID:f64ddb12-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288543,Generation:0,CreationTimestamp:2019-03-17 11:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc002452d17 0xc002452d18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002452d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002452db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.094: INFO: Pod "nginx-deployment-555b55d965-96dfq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-96dfq,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-96dfq,UID:f6a0449e-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288552,Generation:0,CreationTimestamp:2019-03-17 11:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc002452e27 0xc002452e28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002452ea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002452ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.094: INFO: Pod "nginx-deployment-555b55d965-bbmrj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-bbmrj,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-bbmrj,UID:f6a05e4d-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288555,Generation:0,CreationTimestamp:2019-03-17 11:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc002452f37 0xc002452f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002452fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002452fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:36 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.094: INFO: Pod "nginx-deployment-555b55d965-bbnrb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-bbnrb,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-bbnrb,UID:d00343c0-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288434,Generation:0,CreationTimestamp:2019-03-17 11:21:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc002453047 0xc002453048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024530c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024530e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:30 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:10.32.0.13,StartTime:2019-03-17 11:21:31 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-03-17 11:22:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:b67e90a1d8088f0e205c77c793c271524773a6de163fb3855b1c1bedf979da7d docker://ee00d4a25117f7c9816e70d5ee7b9128e34faa44fc62c8d90ed0028cb32fabdc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.095: INFO: Pod "nginx-deployment-555b55d965-dp7c8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-dp7c8,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-dp7c8,UID:f5f77208-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288523,Generation:0,CreationTimestamp:2019-03-17 11:22:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc0024531a7 0xc0024531a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002453220} {node.kubernetes.io/unreachable Exists NoExecute 0xc002453240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.095: INFO: Pod "nginx-deployment-555b55d965-hwb4x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-hwb4x,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-hwb4x,UID:f64dc00d-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288545,Generation:0,CreationTimestamp:2019-03-17 11:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc0024532b7 0xc0024532b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File 
IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002453330} {node.kubernetes.io/unreachable Exists NoExecute 0xc002453350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.095: INFO: Pod "nginx-deployment-555b55d965-kkmbn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-kkmbn,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-kkmbn,UID:f6a019b5-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288551,Generation:0,CreationTimestamp:2019-03-17 11:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc0024533c7 0xc0024533c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002453440} {node.kubernetes.io/unreachable Exists NoExecute 0xc002453460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.095: INFO: Pod 
"nginx-deployment-555b55d965-lbmt9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-lbmt9,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-lbmt9,UID:d00d9cb1-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288419,Generation:0,CreationTimestamp:2019-03-17 11:21:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc0024534d7 0xc0024534d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002453550} {node.kubernetes.io/unreachable Exists NoExecute 0xc002453570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:30 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:10.32.0.12,StartTime:2019-03-17 11:21:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-03-17 11:22:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:b67e90a1d8088f0e205c77c793c271524773a6de163fb3855b1c1bedf979da7d docker://bac51412e6dd7cd4f484e9fba5e24cd56a0af1e5c0effe5487deb45cf09af632}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.095: INFO: Pod "nginx-deployment-555b55d965-m457z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-m457z,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-m457z,UID:f6a06aa3-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288556,Generation:0,CreationTimestamp:2019-03-17 11:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc002453637 0xc002453638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024536b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024536d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.095: INFO: Pod "nginx-deployment-555b55d965-nljwz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-nljwz,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-nljwz,UID:f5f77816-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288525,Generation:0,CreationTimestamp:2019-03-17 11:22:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc002453747 0xc002453748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024537c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024537e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.095: INFO: Pod "nginx-deployment-555b55d965-ph8zn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-ph8zn,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-ph8zn,UID:d00d77b3-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288428,Generation:0,CreationTimestamp:2019-03-17 11:21:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc002453857 0xc002453858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024538d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024538f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2019-03-17 11:21:30 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:10.32.0.11,StartTime:2019-03-17 11:21:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-03-17 11:22:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:b67e90a1d8088f0e205c77c793c271524773a6de163fb3855b1c1bedf979da7d docker://4a357bf0fbb2a9a0970632c8e69cb9fd2a22ce448822f540579754a0e030607e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.096: INFO: Pod "nginx-deployment-555b55d965-qj58r" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-qj58r,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-qj58r,UID:d00d96f4-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288436,Generation:0,CreationTimestamp:2019-03-17 11:21:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc0024539b7 0xc0024539b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002453a30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002453a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:30 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:10.32.0.8,StartTime:2019-03-17 11:21:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-03-17 11:22:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:b67e90a1d8088f0e205c77c793c271524773a6de163fb3855b1c1bedf979da7d docker://efe7be3881291c9b30813bbf1ac7e37859fac312673dd1e7e76105a5b26d3557}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.096: INFO: Pod "nginx-deployment-555b55d965-t4knf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-t4knf,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-t4knf,UID:f64dc3f0-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288548,Generation:0,CreationTimestamp:2019-03-17 11:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc002453b17 0xc002453b18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002453b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002453bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.096: INFO: Pod "nginx-deployment-555b55d965-z7bwt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-z7bwt,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-555b55d965-z7bwt,UID:cfc9f478-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288412,Generation:0,CreationTimestamp:2019-03-17 11:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 cfc50f66-48a6-11e9-a072-fa163e921bae 0xc002453c27 0xc002453c28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File 
IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002453ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002453cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:21:29 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:10.32.0.4,StartTime:2019-03-17 11:21:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-03-17 11:21:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:b67e90a1d8088f0e205c77c793c271524773a6de163fb3855b1c1bedf979da7d docker://202e421ce1711a7848612cd6625bb7f6a5e1d476199aee675eaa1fcbea431b6c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.096: INFO: Pod "nginx-deployment-65bbdb5f8-2sh7z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-2sh7z,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-65bbdb5f8-2sh7z,UID:f64e16d2-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288549,Generation:0,CreationTimestamp:2019-03-17 11:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 ef0334d0-48a6-11e9-a072-fa163e921bae 0xc002453d87 0xc002453d88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002453e00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002453e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.096: INFO: Pod "nginx-deployment-65bbdb5f8-7jpcb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-7jpcb,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-65bbdb5f8-7jpcb,UID:ef8a65bc-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288494,Generation:0,CreationTimestamp:2019-03-17 11:22:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 ef0334d0-48a6-11e9-a072-fa163e921bae 0xc002453e97 0xc002453e98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002453f10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002453f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:23 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:,StartTime:2019-03-17 11:22:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.096: INFO: Pod "nginx-deployment-65bbdb5f8-7rx58" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-7rx58,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-65bbdb5f8-7rx58,UID:ef8a9142-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288480,Generation:0,CreationTimestamp:2019-03-17 11:22:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 ef0334d0-48a6-11e9-a072-fa163e921bae 0xc002453ff7 0xc002453ff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020f0070} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020f0090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:23 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:,StartTime:2019-03-17 11:22:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.097: INFO: Pod "nginx-deployment-65bbdb5f8-b4xnq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-b4xnq,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-65bbdb5f8-b4xnq,UID:f64e28ab-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288542,Generation:0,CreationTimestamp:2019-03-17 11:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 ef0334d0-48a6-11e9-a072-fa163e921bae 0xc0020f0157 
0xc0020f0158}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020f01d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020f01f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.097: INFO: Pod "nginx-deployment-65bbdb5f8-b6tpq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-b6tpq,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-65bbdb5f8-b6tpq,UID:f10996f5-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288503,Generation:0,CreationTimestamp:2019-03-17 11:22:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 ef0334d0-48a6-11e9-a072-fa163e921bae 0xc0020f0267 0xc0020f0268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020f02e0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0020f0300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:26 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:,StartTime:2019-03-17 11:22:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.097: INFO: Pod "nginx-deployment-65bbdb5f8-glnmz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-glnmz,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-65bbdb5f8-glnmz,UID:f5e936e8-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288554,Generation:0,CreationTimestamp:2019-03-17 11:22:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 ef0334d0-48a6-11e9-a072-fa163e921bae 0xc0020f03c7 0xc0020f03c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020f0440} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020f0460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:33 +0000 UTC 
}],Message:,Reason:,HostIP:192.168.100.7,PodIP:,StartTime:2019-03-17 11:22:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.097: INFO: Pod "nginx-deployment-65bbdb5f8-lw7jd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-lw7jd,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-65bbdb5f8-lw7jd,UID:f5f67d53-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288524,Generation:0,CreationTimestamp:2019-03-17 11:22:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 ef0334d0-48a6-11e9-a072-fa163e921bae 0xc0020f0527 0xc0020f0528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020f05a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020f05c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.098: INFO: Pod "nginx-deployment-65bbdb5f8-p7ndx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-p7ndx,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-65bbdb5f8-p7ndx,UID:f64e1b51-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288547,Generation:0,CreationTimestamp:2019-03-17 11:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 ef0334d0-48a6-11e9-a072-fa163e921bae 0xc0020f0637 0xc0020f0638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020f06b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020f06d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.098: INFO: Pod "nginx-deployment-65bbdb5f8-pxg7h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-pxg7h,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-65bbdb5f8-pxg7h,UID:ef888cf1-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288464,Generation:0,CreationTimestamp:2019-03-17 11:22:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 ef0334d0-48a6-11e9-a072-fa163e921bae 0xc0020f0747 0xc0020f0748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020f07c0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0020f07e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:23 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:,StartTime:2019-03-17 11:22:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.098: INFO: Pod "nginx-deployment-65bbdb5f8-tjztt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-tjztt,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-65bbdb5f8-tjztt,UID:f6a13f53-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288557,Generation:0,CreationTimestamp:2019-03-17 11:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 ef0334d0-48a6-11e9-a072-fa163e921bae 0xc0020f08a7 0xc0020f08a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020f0920} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020f0940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.098: INFO: Pod "nginx-deployment-65bbdb5f8-tpmhg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-tpmhg,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-65bbdb5f8-tpmhg,UID:f64e21b5-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288550,Generation:0,CreationTimestamp:2019-03-17 11:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 ef0334d0-48a6-11e9-a072-fa163e921bae 0xc0020f09b7 0xc0020f09b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020f0a30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020f0a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.099: INFO: Pod "nginx-deployment-65bbdb5f8-w94kg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-w94kg,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-65bbdb5f8-w94kg,UID:f0abde78-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288501,Generation:0,CreationTimestamp:2019-03-17 11:22:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 ef0334d0-48a6-11e9-a072-fa163e921bae 0xc0020f0ac7 0xc0020f0ac8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020f0b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020f0b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:25 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.7,PodIP:,StartTime:2019-03-17 11:22:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 17 11:22:41.099: INFO: Pod "nginx-deployment-65bbdb5f8-wv2hj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-wv2hj,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-knrzd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-knrzd/pods/nginx-deployment-65bbdb5f8-wv2hj,UID:f5f6894f-48a6-11e9-a072-fa163e921bae,ResourceVersion:1288522,Generation:0,CreationTimestamp:2019-03-17 11:22:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 ef0334d0-48a6-11e9-a072-fa163e921bae 0xc0020f0c27 0xc0020f0c28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6vp2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6vp2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6vp2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020f0ca0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0020f0cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:22:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:22:41.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-knrzd" for this suite. Mar 17 11:23:56.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:23:56.804: INFO: namespace: e2e-tests-deployment-knrzd, resource: bindings, ignored listing per whitelist Mar 17 11:23:56.808: INFO: namespace e2e-tests-deployment-knrzd deletion completed in 1m14.578969181s • [SLOW TEST:147.487 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:23:56.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-29ae65b2-48a7-11e9-bf64-0242ac110009 STEP: Creating a pod to test consume secrets Mar 17 11:24:02.457: INFO: Waiting up to 5m0s for pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009" in namespace "e2e-tests-secrets-22sjc" to be "success or failure" Mar 17 11:24:02.882: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 424.942182ms Mar 17 11:24:05.079: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.621877757s Mar 17 11:24:07.081: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.624246524s Mar 17 11:24:09.097: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.639952746s Mar 17 11:24:11.104: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 8.646456603s Mar 17 11:24:13.108: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.650419373s Mar 17 11:24:15.153: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 12.69532742s Mar 17 11:24:17.156: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 14.698417134s Mar 17 11:24:19.159: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.701345567s Mar 17 11:24:21.161: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 18.704193631s Mar 17 11:24:23.164: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 20.706767861s Mar 17 11:24:25.167: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 22.70963369s Mar 17 11:24:27.351: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 24.893814742s Mar 17 11:24:29.354: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 26.896895073s Mar 17 11:24:31.357: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 28.899584874s Mar 17 11:24:33.360: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 30.902485082s Mar 17 11:24:35.658: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.200785098s STEP: Saw pod success Mar 17 11:24:35.658: INFO: Pod "pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:24:35.717: INFO: Trying to get logs from node kube pod pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009 container secret-volume-test: STEP: delete the pod Mar 17 11:24:35.955: INFO: Waiting for pod pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009 to disappear Mar 17 11:24:35.959: INFO: Pod pod-secrets-2a711924-48a7-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:24:35.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-22sjc" for this suite. Mar 17 11:24:42.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:24:42.295: INFO: namespace: e2e-tests-secrets-22sjc, resource: bindings, ignored listing per whitelist Mar 17 11:24:42.304: INFO: namespace e2e-tests-secrets-22sjc deletion completed in 6.339717438s STEP: Destroying namespace "e2e-tests-secret-namespace-klqgl" for this suite. 
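The [sig-storage] Secrets case above mounts a Secret as a volume while a Secret with the same name exists in a second namespace, and checks that only the pod's own namespace is consulted. A minimal sketch of the same pattern with kubectl, using made-up names (namespaces demo-a/demo-b, secret shared-name) rather than the generated manifests the suite actually submits; busybox:1.29 is one of the images already present on the node in this run:

kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl -n demo-a create secret generic shared-name --from-literal=data=from-a
kubectl -n demo-b create secret generic shared-name --from-literal=data=from-b
kubectl -n demo-a apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/data"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name   # resolved in the pod's own namespace (demo-a)
EOF
kubectl -n demo-a logs secret-volume-demo   # once the pod has completed, prints: from-a

The pod should print from-a only: secretName is always resolved within the pod's namespace, which is exactly what the test asserts.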
Mar 17 11:24:48.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:24:48.399: INFO: namespace: e2e-tests-secret-namespace-klqgl, resource: bindings, ignored listing per whitelist Mar 17 11:24:48.399: INFO: namespace e2e-tests-secret-namespace-klqgl deletion completed in 6.094814131s • [SLOW TEST:51.591 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:24:48.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 11:24:48.737: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46473b7e-48a7-11e9-bf64-0242ac110009" in namespace "e2e-tests-downward-api-qwqnw" to be "success or failure" Mar 17 11:24:48.761: INFO: Pod "downwardapi-volume-46473b7e-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 24.135865ms Mar 17 11:24:50.764: INFO: Pod "downwardapi-volume-46473b7e-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02708779s Mar 17 11:24:52.872: INFO: Pod "downwardapi-volume-46473b7e-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135031587s Mar 17 11:24:54.930: INFO: Pod "downwardapi-volume-46473b7e-48a7-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.193320611s STEP: Saw pod success Mar 17 11:24:54.930: INFO: Pod "downwardapi-volume-46473b7e-48a7-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:24:54.932: INFO: Trying to get logs from node kube pod downwardapi-volume-46473b7e-48a7-11e9-bf64-0242ac110009 container client-container: STEP: delete the pod Mar 17 11:24:55.006: INFO: Waiting for pod downwardapi-volume-46473b7e-48a7-11e9-bf64-0242ac110009 to disappear Mar 17 11:24:55.792: INFO: Pod downwardapi-volume-46473b7e-48a7-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:24:55.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qwqnw" for this suite. 
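The Downward API volume case above ("should set mode on item file") asserts that a per-item mode is applied to the projected file. A rough equivalent with invented names (namespace demo, pod downwardapi-mode-demo); the suite's own pod spec is generated in Go and not shown in the log:

kubectl create namespace demo
kubectl -n demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 256   # decimal for octal 0400; the per-item mode this kind of test checks
EOF

Once the pod completes, its log should show /etc/podinfo/podname as -r-------- with the pod name as content.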
Mar 17 11:25:01.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:25:01.904: INFO: namespace: e2e-tests-downward-api-qwqnw, resource: bindings, ignored listing per whitelist Mar 17 11:25:01.930: INFO: namespace e2e-tests-downward-api-qwqnw deletion completed in 6.132370498s • [SLOW TEST:13.531 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:25:01.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-c75c9/secret-test-4e706e61-48a7-11e9-bf64-0242ac110009 STEP: Creating a pod to test consume secrets Mar 17 11:25:02.447: INFO: Waiting up to 5m0s for pod "pod-configmaps-4e721381-48a7-11e9-bf64-0242ac110009" in namespace "e2e-tests-secrets-c75c9" to be "success or failure" Mar 17 11:25:02.455: INFO: Pod "pod-configmaps-4e721381-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 7.639525ms Mar 17 11:25:04.883: INFO: Pod "pod-configmaps-4e721381-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.435347213s Mar 17 11:25:07.005: INFO: Pod "pod-configmaps-4e721381-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.557884799s Mar 17 11:25:09.008: INFO: Pod "pod-configmaps-4e721381-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.560906334s Mar 17 11:25:12.254: INFO: Pod "pod-configmaps-4e721381-48a7-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.806664211s STEP: Saw pod success Mar 17 11:25:12.254: INFO: Pod "pod-configmaps-4e721381-48a7-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:25:12.258: INFO: Trying to get logs from node kube pod pod-configmaps-4e721381-48a7-11e9-bf64-0242ac110009 container env-test: STEP: delete the pod Mar 17 11:25:13.397: INFO: Waiting for pod pod-configmaps-4e721381-48a7-11e9-bf64-0242ac110009 to disappear Mar 17 11:25:13.402: INFO: Pod pod-configmaps-4e721381-48a7-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:25:13.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-c75c9" for this suite. 
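The [sig-api-machinery] Secrets case above consumes a Secret through an environment variable rather than a volume. A comparable sketch with invented names (secret env-secret, variable SECRET_PASSWORD), reusing the illustrative demo namespace from the sketch above:

kubectl -n demo create secret generic env-secret --from-literal=password=s3cr3t
kubectl -n demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "echo password=$SECRET_PASSWORD"]
    env:
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: env-secret
          key: password
EOF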
Mar 17 11:25:19.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:25:19.896: INFO: namespace: e2e-tests-secrets-c75c9, resource: bindings, ignored listing per whitelist Mar 17 11:25:19.972: INFO: namespace e2e-tests-secrets-c75c9 deletion completed in 6.564944421s • [SLOW TEST:18.042 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:25:19.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 17 11:25:20.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-r8zr9' Mar 17 11:25:20.605: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 17 11:25:20.605: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Mar 17 11:25:20.779: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-7blhp] Mar 17 11:25:20.779: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-7blhp" in namespace "e2e-tests-kubectl-r8zr9" to be "running and ready" Mar 17 11:25:20.781: INFO: Pod "e2e-test-nginx-rc-7blhp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183366ms Mar 17 11:25:22.785: INFO: Pod "e2e-test-nginx-rc-7blhp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005421494s Mar 17 11:25:24.861: INFO: Pod "e2e-test-nginx-rc-7blhp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081214374s Mar 17 11:25:27.121: INFO: Pod "e2e-test-nginx-rc-7blhp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.341317673s Mar 17 11:25:29.160: INFO: Pod "e2e-test-nginx-rc-7blhp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.380997188s Mar 17 11:25:31.251: INFO: Pod "e2e-test-nginx-rc-7blhp": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.471222104s Mar 17 11:25:31.251: INFO: Pod "e2e-test-nginx-rc-7blhp" satisfied condition "running and ready" Mar 17 11:25:31.251: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-7blhp] Mar 17 11:25:31.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-r8zr9' Mar 17 11:25:31.395: INFO: stderr: "" Mar 17 11:25:31.395: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Mar 17 11:25:31.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-r8zr9' Mar 17 11:25:31.506: INFO: stderr: "" Mar 17 11:25:31.506: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:25:31.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-r8zr9" for this suite. Mar 17 11:25:57.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:25:57.934: INFO: namespace: e2e-tests-kubectl-r8zr9, resource: bindings, ignored listing per whitelist Mar 17 11:25:58.013: INFO: namespace e2e-tests-kubectl-r8zr9 deletion completed in 26.501593982s • [SLOW TEST:38.041 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:25:58.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-6fc75c9b-48a7-11e9-bf64-0242ac110009 STEP: Creating a pod to test consume configMaps Mar 17 11:25:58.376: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6fc7e55a-48a7-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-lqfxf" to be "success or failure" Mar 17 11:25:58.724: INFO: Pod "pod-projected-configmaps-6fc7e55a-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 348.112512ms Mar 17 11:26:00.728: INFO: Pod "pod-projected-configmaps-6fc7e55a-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.351895591s Mar 17 11:26:02.732: INFO: Pod "pod-projected-configmaps-6fc7e55a-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356134159s Mar 17 11:26:04.961: INFO: Pod "pod-projected-configmaps-6fc7e55a-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.585457431s Mar 17 11:26:06.965: INFO: Pod "pod-projected-configmaps-6fc7e55a-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 8.588669652s Mar 17 11:26:08.968: INFO: Pod "pod-projected-configmaps-6fc7e55a-48a7-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.591682238s STEP: Saw pod success Mar 17 11:26:08.968: INFO: Pod "pod-projected-configmaps-6fc7e55a-48a7-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:26:08.970: INFO: Trying to get logs from node kube pod pod-projected-configmaps-6fc7e55a-48a7-11e9-bf64-0242ac110009 container projected-configmap-volume-test: STEP: delete the pod Mar 17 11:26:09.345: INFO: Waiting for pod pod-projected-configmaps-6fc7e55a-48a7-11e9-bf64-0242ac110009 to disappear Mar 17 11:26:09.410: INFO: Pod pod-projected-configmaps-6fc7e55a-48a7-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:26:09.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lqfxf" for this suite. Mar 17 11:26:15.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:26:15.696: INFO: namespace: e2e-tests-projected-lqfxf, resource: bindings, ignored listing per whitelist Mar 17 11:26:15.750: INFO: namespace e2e-tests-projected-lqfxf deletion completed in 6.335101019s • [SLOW TEST:17.736 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:26:15.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Mar 17 11:26:28.131: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
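The namespace test that starts here waits for a pod-bearing namespace to disappear and, as the failure reported further down shows, eventually times out while the namespace is still Terminating. When that happens, the usual first step is to see what is still left inside the namespace; a hedged diagnostic sketch (not part of the suite), using the namespace name from this run purely as a placeholder:

NS=e2e-tests-namespaces-m97n9        # placeholder: the namespace that refuses to go away
kubectl get namespace "$NS" -o jsonpath='{.status.phase}{"\n"}'    # usually reports Terminating
kubectl get pods -n "$NS" -o wide                                  # pods still being torn down
# enumerate every namespaced resource type and show what still exists in the namespace
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get -n "$NS" --ignore-not-found --show-kind

In this run the culprit is visible below: the nginx container of test-pod-uninitialized never reaches Ready, so pod teardown (and therefore namespace removal) is still in progress when the 90s wait expires.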
Mar 17 11:27:58.185: INFO: Unexpected error occurred: timed out waiting for the condition [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 STEP: Collecting events from namespace "e2e-tests-namespaces-m97n9". STEP: Found 0 events. Mar 17 11:27:58.196: INFO: POD NODE PHASE GRACE CONDITIONS Mar 17 11:27:58.196: INFO: test-pod-uninitialized kube Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:26:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:27:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:27:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:26:28 +0000 UTC }] Mar 17 11:27:58.196: INFO: coredns-86c58d9df4-lrf5x kube Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:41 +0000 UTC }] Mar 17 11:27:58.196: INFO: coredns-86c58d9df4-xv8sl kube Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:41 +0000 UTC }] Mar 17 11:27:58.196: INFO: etcd-kube kube Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:37:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:37:58 +0000 UTC }] Mar 17 11:27:58.196: INFO: kube-apiserver-kube kube Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:37:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:37:58 +0000 UTC }] Mar 17 11:27:58.196: INFO: kube-controller-manager-kube kube Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:37:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-14 00:07:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-14 00:07:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:37:58 +0000 UTC }] Mar 17 11:27:58.196: INFO: kube-proxy-6jlw8 kube Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:22 +0000 UTC }] Mar 17 11:27:58.196: INFO: kube-scheduler-kube kube Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:37:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-14 00:07:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-14 00:07:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 
11:37:58 +0000 UTC }] Mar 17 11:27:58.196: INFO: weave-net-47d2b kube Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:23:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:23:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-09 11:38:24 +0000 UTC }] Mar 17 11:27:58.196: INFO: Mar 17 11:27:58.198: INFO: Logging node info for node kube Mar 17 11:27:58.200: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:kube,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/kube,UID:d1203fc1-425f-11e9-a072-fa163e921bae,ResourceVersion:1289270,Generation:0,CreationTimestamp:2019-03-09 11:38:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: kube,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20749852672 0} {} BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4143030272 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18674867374 0} {} 18674867374 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4038172672 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-03-09 11:38:35 +0000 UTC 2019-03-09 11:38:35 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2019-03-17 11:27:51 +0000 UTC 2019-03-09 11:38:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-03-17 11:27:51 +0000 UTC 2019-03-09 11:38:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-03-17 11:27:51 +0000 UTC 2019-03-09 11:38:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-03-17 11:27:51 +0000 UTC 2019-03-09 11:38:41 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 192.168.100.7} {Hostname kube}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9d25d7ed8378435ca43765c7c2778443,SystemUUID:9D25D7ED-8378-435C-A437-65C7C2778443,BootID:9c464167-55c7-41b6-850f-9cb6e463b07d,KernelVersion:4.4.0-142-generic,OSImage:Ubuntu 16.04.5 LTS,ContainerRuntimeVersion:docker://18.6.1,KubeletVersion:v1.13.4,KubeProxyVersion:v1.13.4,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:9f3c39082ceea4979edabf1c981a54c0b3e32da5243c242beffa556241e81ae5 k8s.gcr.io/kube-apiserver:v1.13.4] 180983922} {[weaveworks/weave-kube@sha256:103e37b504631f7c762ef6baa79b3e7d2d2cf718accbf659e7cae933b6ca937c weaveworks/weave-kube:2.5.1] 148146468} {[k8s.gcr.io/kube-controller-manager@sha256:84477c0a8d0f8db87f12856d7b97c2784856caf3bc46bc9dd73f5ac219bc9d06 k8s.gcr.io/kube-controller-manager:v1.13.4] 146244370} {[nginx@sha256:98efe605f61725fd817ea69521b0eeb32bef007af0e3d0aeb6258c6e6fe7fc1a nginx:latest] 109252443} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:e57fd7593e2bdc161e41b8922e5fba6dbdab790608fc8671721eb26fbabd3090 k8s.gcr.io/kube-proxy:v1.13.4] 80254896} {[k8s.gcr.io/kube-scheduler@sha256:e7b2f9b1dcfa03b0e43b891979075d62086fe14e169de081f9c23db378f5b2f7 k8s.gcr.io/kube-scheduler:v1.13.4] 79623282} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 61365829} {[weaveworks/weave-npc@sha256:e51029aa3abcada82c59b5e5161b71dc14c04ca664b53bb1cdd2ebbf850bf258 weaveworks/weave-npc:2.5.1] 49569458} {[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1] 42323657} {[gcr.io/google-containers/debian-base@sha256:b70f7099dbcb5b306c6d97285701e0191d851061bce24d5c28f32cf303318583 gcr.io/google-containers/debian-base:0.4.0] 42322802} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} {[quay.io/coreos/etcd@sha256:cb9cee3d9d49050e7682fde0a9b26d6948a0117b1b4367b8170fcaa3960a57b8 quay.io/coreos/etcd:v3.3.10] 39468433} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:55390addbb1a2b82e6ffabafd72e0f5dfbc8f86c2e7d9f41fb914cca537bd500 nginx:1.15-alpine] 16083956} {[nginx@sha256:b67e90a1d8088f0e205c77c793c271524773a6de163fb3855b1c1bedf979da7d nginx:1.14-alpine] 16032773} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} 
{[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7] 4206620} {[alpine@sha256:fea30b82fd63049b797ab37f13bf9772b59c15a36b1eec6b031b6e483fd7f252 alpine:3.7] 4206494} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Mar 17 11:27:58.201: INFO: Logging kubelet events for node kube Mar 17 11:27:58.204: INFO: Logging pods the kubelet thinks is on node kube Mar 17 11:27:58.214: INFO: kube-apiserver-kube started at (0+0 container statuses recorded) Mar 17 11:27:58.214: INFO: kube-proxy-6jlw8 started at 2019-03-09 11:38:22 +0000 UTC (0+1 container statuses recorded) Mar 17 11:27:58.214: INFO: Container kube-proxy ready: true, restart count 0 Mar 17 11:27:58.214: INFO: etcd-kube started at (0+0 container statuses recorded) Mar 17 11:27:58.214: INFO: kube-scheduler-kube started at (0+0 container statuses recorded) Mar 17 11:27:58.214: INFO: test-pod-uninitialized started at 2019-03-17 11:26:28 +0000 UTC (0+1 
container statuses recorded) Mar 17 11:27:58.214: INFO: Container nginx ready: false, restart count 0 Mar 17 11:27:58.214: INFO: weave-net-47d2b started at 2019-03-09 11:38:24 +0000 UTC (0+2 container statuses recorded) Mar 17 11:27:58.214: INFO: Container weave ready: true, restart count 0 Mar 17 11:27:58.214: INFO: Container weave-npc ready: true, restart count 0 Mar 17 11:27:58.214: INFO: coredns-86c58d9df4-xv8sl started at 2019-03-09 11:38:41 +0000 UTC (0+1 container statuses recorded) Mar 17 11:27:58.215: INFO: Container coredns ready: true, restart count 0 Mar 17 11:27:58.215: INFO: coredns-86c58d9df4-lrf5x started at 2019-03-09 11:38:41 +0000 UTC (0+1 container statuses recorded) Mar 17 11:27:58.215: INFO: Container coredns ready: true, restart count 0 Mar 17 11:27:58.215: INFO: kube-controller-manager-kube started at (0+0 container statuses recorded) W0317 11:27:58.219720 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 17 11:27:58.723: INFO: Latency metrics for node kube Mar 17 11:27:58.723: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:1m37.335949s} Mar 17 11:27:58.723: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:1m34.338863s} Mar 17 11:27:58.723: INFO: {Operation:start_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:1m24.937618s} Mar 17 11:27:58.723: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:1m21.21897s} Mar 17 11:27:58.723: INFO: {Operation:start_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:1m18.640774s} Mar 17 11:27:58.723: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:50.536916s} Mar 17 11:27:58.723: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:49.90169s} Mar 17 11:27:58.723: INFO: {Operation:inspect_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:36.409688s} Mar 17 11:27:58.723: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:34.797653s} Mar 17 11:27:58.723: INFO: {Operation: Method:pod_worker_start_latency_microseconds Quantile:0.99 Latency:17.384509s} Mar 17 11:27:58.723: INFO: {Operation: Method:pleg_relist_latency_microseconds Quantile:0.99 Latency:15.679744s} Mar 17 11:27:58.723: INFO: {Operation: Method:pod_worker_start_latency_microseconds Quantile:0.9 Latency:10.485146s} Mar 17 11:27:58.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-m97n9" for this suite. Mar 17 11:28:04.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:28:04.929: INFO: namespace: e2e-tests-namespaces-m97n9, resource: bindings, ignored listing per whitelist Mar 17 11:28:05.011: INFO: namespace e2e-tests-namespaces-m97n9 deletion completed in 6.284735363s STEP: Destroying namespace "e2e-tests-nsdeletetest-v8nn9" for this suite. Mar 17 11:28:05.018: INFO: Couldn't delete ns: "e2e-tests-nsdeletetest-v8nn9": Operation cannot be fulfilled on namespaces "e2e-tests-nsdeletetest-v8nn9": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system. 
(&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"Operation cannot be fulfilled on namespaces \"e2e-tests-nsdeletetest-v8nn9\": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system.", Reason:"Conflict", Details:(*v1.StatusDetails)(0xc00175d980), Code:409}}) • Failure [109.269 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Expected error: <*errors.errorString | 0xc00009f870>: { s: "timed out waiting for the condition", } timed out waiting for the condition not to have occurred /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:28:05.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 17 11:28:05.357: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zqj6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-zqj6z/configmaps/e2e-watch-test-label-changed,UID:bb70baee-48a7-11e9-a072-fa163e921bae,ResourceVersion:1289297,Generation:0,CreationTimestamp:2019-03-17 11:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 17 11:28:05.357: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zqj6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-zqj6z/configmaps/e2e-watch-test-label-changed,UID:bb70baee-48a7-11e9-a072-fa163e921bae,ResourceVersion:1289298,Generation:0,CreationTimestamp:2019-03-17 11:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} Mar 17 11:28:05.357: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zqj6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-zqj6z/configmaps/e2e-watch-test-label-changed,UID:bb70baee-48a7-11e9-a072-fa163e921bae,ResourceVersion:1289299,Generation:0,CreationTimestamp:2019-03-17 11:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 17 11:28:15.659: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zqj6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-zqj6z/configmaps/e2e-watch-test-label-changed,UID:bb70baee-48a7-11e9-a072-fa163e921bae,ResourceVersion:1289313,Generation:0,CreationTimestamp:2019-03-17 11:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 17 11:28:15.659: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zqj6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-zqj6z/configmaps/e2e-watch-test-label-changed,UID:bb70baee-48a7-11e9-a072-fa163e921bae,ResourceVersion:1289314,Generation:0,CreationTimestamp:2019-03-17 11:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 17 11:28:15.659: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zqj6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-zqj6z/configmaps/e2e-watch-test-label-changed,UID:bb70baee-48a7-11e9-a072-fa163e921bae,ResourceVersion:1289316,Generation:0,CreationTimestamp:2019-03-17 11:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:28:15.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-zqj6z" for this suite. 
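The watch case above filters ConfigMaps by the label watch-this-configmap=label-changed-and-restored and receives a DELETED event as soon as the label is changed away, then a fresh ADDED event when it is restored. The suite drives this through the Go client; the same behaviour can be reproduced against the raw watch endpoint, sketched here with kubectl proxy and curl and invented names (namespace demo, ConfigMap watch-demo):

kubectl proxy --port=8001 &
# terminal 1: watch only ConfigMaps carrying the label used above; events arrive as JSON lines
curl -N 'http://127.0.0.1:8001/api/v1/namespaces/demo/configmaps?watch=1&labelSelector=watch-this-configmap%3Dlabel-changed-and-restored'
# terminal 2: drive the same sequence the test performs
kubectl -n demo create configmap watch-demo
kubectl -n demo label configmap watch-demo watch-this-configmap=label-changed-and-restored    # ADDED
kubectl -n demo label --overwrite configmap watch-demo watch-this-configmap=something-else    # DELETED (no longer matches the selector)
kubectl -n demo label --overwrite configmap watch-demo watch-this-configmap=label-changed-and-restored    # ADDED again
kubectl -n demo delete configmap watch-demo                                                   # DELETED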
Mar 17 11:28:21.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:28:21.717: INFO: namespace: e2e-tests-watch-zqj6z, resource: bindings, ignored listing per whitelist Mar 17 11:28:21.985: INFO: namespace e2e-tests-watch-zqj6z deletion completed in 6.307274859s • [SLOW TEST:16.966 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:28:21.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 17 11:28:22.338: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c58be2fd-48a7-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-snkmr" to be "success or failure" Mar 17 11:28:22.743: INFO: Pod "downwardapi-volume-c58be2fd-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 404.964398ms Mar 17 11:28:24.746: INFO: Pod "downwardapi-volume-c58be2fd-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.407931399s Mar 17 11:28:26.750: INFO: Pod "downwardapi-volume-c58be2fd-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.411361678s Mar 17 11:28:28.755: INFO: Pod "downwardapi-volume-c58be2fd-48a7-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.416255952s STEP: Saw pod success Mar 17 11:28:28.755: INFO: Pod "downwardapi-volume-c58be2fd-48a7-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:28:28.757: INFO: Trying to get logs from node kube pod downwardapi-volume-c58be2fd-48a7-11e9-bf64-0242ac110009 container client-container: STEP: delete the pod Mar 17 11:28:29.073: INFO: Waiting for pod downwardapi-volume-c58be2fd-48a7-11e9-bf64-0242ac110009 to disappear Mar 17 11:28:29.557: INFO: Pod downwardapi-volume-c58be2fd-48a7-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:28:29.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-snkmr" for this suite. 
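The projected downwardAPI case above checks that a volume-level defaultMode is applied to every projected file. An approximate manifest with made-up names (the real spec is generated by the suite), again in the illustrative demo namespace:

kubectl -n demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 256          # decimal for octal 0400, applied to every projected file
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF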
Mar 17 11:28:35.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:28:35.723: INFO: namespace: e2e-tests-projected-snkmr, resource: bindings, ignored listing per whitelist Mar 17 11:28:35.772: INFO: namespace e2e-tests-projected-snkmr deletion completed in 6.206117314s • [SLOW TEST:13.787 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:28:35.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-cdd51f5d-48a7-11e9-bf64-0242ac110009 STEP: Creating a pod to test consume secrets Mar 17 11:28:36.178: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cdd7756e-48a7-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-dz5dc" to be "success or failure" Mar 17 11:28:36.192: INFO: Pod "pod-projected-secrets-cdd7756e-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 14.308072ms Mar 17 11:28:38.247: INFO: Pod "pod-projected-secrets-cdd7756e-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069108733s Mar 17 11:28:40.250: INFO: Pod "pod-projected-secrets-cdd7756e-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072391751s Mar 17 11:28:43.380: INFO: Pod "pod-projected-secrets-cdd7756e-48a7-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 7.201858859s Mar 17 11:28:45.401: INFO: Pod "pod-projected-secrets-cdd7756e-48a7-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.223565185s STEP: Saw pod success Mar 17 11:28:45.401: INFO: Pod "pod-projected-secrets-cdd7756e-48a7-11e9-bf64-0242ac110009" satisfied condition "success or failure" Mar 17 11:28:45.898: INFO: Trying to get logs from node kube pod pod-projected-secrets-cdd7756e-48a7-11e9-bf64-0242ac110009 container projected-secret-volume-test: STEP: delete the pod Mar 17 11:28:46.086: INFO: Waiting for pod pod-projected-secrets-cdd7756e-48a7-11e9-bf64-0242ac110009 to disappear Mar 17 11:28:46.096: INFO: Pod pod-projected-secrets-cdd7756e-48a7-11e9-bf64-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:28:46.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dz5dc" for this suite. 
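The projected secret case above mounts a Secret through a projected volume source rather than a plain secret volume. A small illustrative manifest with made-up names (secret projected-secret-demo, key username):

kubectl -n demo create secret generic projected-secret-demo --from-literal=username=admin
kubectl -n demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["cat", "/projected-volume/username"]
    volumeMounts:
    - name: projected-volume
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: projected-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
EOF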
Mar 17 11:28:52.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:28:52.212: INFO: namespace: e2e-tests-projected-dz5dc, resource: bindings, ignored listing per whitelist Mar 17 11:28:52.220: INFO: namespace e2e-tests-projected-dz5dc deletion completed in 6.121179749s • [SLOW TEST:16.448 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:28:52.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 17 11:28:52.380: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-jjz9z,SelfLink:/api/v1/namespaces/e2e-tests-watch-jjz9z/configmaps/e2e-watch-test-resource-version,UID:d77e753f-48a7-11e9-a072-fa163e921bae,ResourceVersion:1289421,Generation:0,CreationTimestamp:2019-03-17 11:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 17 11:28:52.380: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-jjz9z,SelfLink:/api/v1/namespaces/e2e-tests-watch-jjz9z/configmaps/e2e-watch-test-resource-version,UID:d77e753f-48a7-11e9-a072-fa163e921bae,ResourceVersion:1289422,Generation:0,CreationTimestamp:2019-03-17 11:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 17 11:28:52.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-jjz9z" for this suite. 
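The last watch case starts its watch at a recorded resourceVersion so that changes made after that revision are replayed, which is what the MODIFIED and DELETED events above show. A sketch of the same idea against the REST API, with invented names (namespace demo, ConfigMap rv-demo); replay only works while the requested revision is still within the apiserver's watch history:

kubectl proxy --port=8001 &
kubectl -n demo create configmap rv-demo
RV=$(kubectl -n demo get configmap rv-demo -o jsonpath='{.metadata.resourceVersion}')
kubectl -n demo label configmap rv-demo mutation=1
kubectl -n demo delete configmap rv-demo
# watching from the recorded revision replays the label change (MODIFIED) and the delete (DELETED)
curl -N "http://127.0.0.1:8001/api/v1/namespaces/demo/configmaps?watch=1&resourceVersion=${RV}"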
Mar 17 11:28:58.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 17 11:28:58.441: INFO: namespace: e2e-tests-watch-jjz9z, resource: bindings, ignored listing per whitelist Mar 17 11:28:58.509: INFO: namespace e2e-tests-watch-jjz9z deletion completed in 6.12556713s • [SLOW TEST:6.289 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 17 11:28:58.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 17 11:28:58.803: INFO: (0) /api/v1/nodes/kube/proxy/logs/:
alternatives.log
apt/
... (200; 9.549226ms)
Mar 17 11:28:58.808: INFO: (1) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 4.238419ms)
Mar 17 11:28:58.812: INFO: (2) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 4.844543ms)
Mar 17 11:28:58.816: INFO: (3) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 3.054181ms)
Mar 17 11:28:58.818: INFO: (4) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 2.599128ms)
Mar 17 11:28:58.820: INFO: (5) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 2.259479ms)
Mar 17 11:28:58.824: INFO: (6) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 3.259799ms)
Mar 17 11:28:58.826: INFO: (7) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 2.400626ms)
Mar 17 11:28:58.829: INFO: (8) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 3.296001ms)
Mar 17 11:28:58.833: INFO: (9) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 3.102646ms)
Mar 17 11:28:58.836: INFO: (10) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 3.17668ms)
Mar 17 11:28:58.839: INFO: (11) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 3.153873ms)
Mar 17 11:28:58.842: INFO: (12) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 2.683952ms)
Mar 17 11:28:58.847: INFO: (13) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 4.95695ms)
Mar 17 11:28:58.853: INFO: (14) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 6.168807ms)
Mar 17 11:28:58.858: INFO: (15) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 5.22145ms)
Mar 17 11:28:58.862: INFO: (16) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 4.104582ms)
Mar 17 11:28:58.865: INFO: (17) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 2.707547ms)
Mar 17 11:28:58.867: INFO: (18) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 2.164074ms)
Mar 17 11:28:58.870: INFO: (19) /api/v1/nodes/kube/proxy/logs/: 
alternatives.log
apt/
... (200; 2.317163ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:28:58.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-4nlvb" for this suite.
Mar 17 11:29:09.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:29:09.096: INFO: namespace: e2e-tests-proxy-4nlvb, resource: bindings, ignored listing per whitelist
Mar 17 11:29:09.126: INFO: namespace e2e-tests-proxy-4nlvb deletion completed in 10.253555564s

• [SLOW TEST:10.617 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
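Note: the twenty numbered requests above all hit the node proxy subresource path /api/v1/nodes/kube/proxy/logs/ and are expected to return HTTP 200; the entries echoed back (alternatives.log, apt/, ...) are the node's log directory served by the kubelet and relayed through the apiserver. A minimal Go sketch of the same request follows; it assumes a local "kubectl proxy" listening on 127.0.0.1:8001, which is not part of this run.

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    // Same node proxy subresource path the conformance test requests twenty times,
    // reached here through an assumed local "kubectl proxy" on port 8001.
    resp, err := http.Get("http://127.0.0.1:8001/api/v1/nodes/kube/proxy/logs/")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(resp.StatusCode) // the test asserts 200
    fmt.Println(string(body))    // directory listing: alternatives.log, apt/, ...
}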
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:29:09.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Mar 17 11:29:09.362: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:29:16.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-7fwg9" for this suite.
Mar 17 11:29:26.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:29:26.835: INFO: namespace: e2e-tests-init-container-7fwg9, resource: bindings, ignored listing per whitelist
Mar 17 11:29:26.914: INFO: namespace e2e-tests-init-container-7fwg9 deletion completed in 10.149655732s

• [SLOW TEST:17.788 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
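Note: the InitContainer case above creates a pod with restartPolicy Never whose init container exits non-zero, then checks that the app container never starts and the pod ends up Failed. A rough Go sketch of that pod shape follows; the busybox image and command strings are placeholders rather than the exact values the e2e framework uses, and the k8s.io/api and k8s.io/apimachinery modules are assumed to be on the module path.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "init-fails-restart-never"},
        Spec: corev1.PodSpec{
            // With RestartNever, a failed init container fails the whole pod permanently.
            RestartPolicy: corev1.RestartPolicyNever,
            InitContainers: []corev1.Container{
                {Name: "init-fail", Image: "busybox", Command: []string{"sh", "-c", "exit 1"}},
            },
            Containers: []corev1.Container{
                // Must never be started, because the init container above fails.
                {Name: "app", Image: "busybox", Command: []string{"sh", "-c", "sleep 3600"}},
            },
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}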
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:29:26.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Mar 17 11:29:27.259: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 17 11:29:27.264: INFO: Waiting for terminating namespaces to be deleted...
Mar 17 11:29:42.269: INFO: 
Logging pods the kubelet thinks is on node kube before test
Mar 17 11:29:42.276: INFO: kube-scheduler-kube from kube-system started at  (0 container statuses recorded)
Mar 17 11:29:42.276: INFO: weave-net-47d2b from kube-system started at 2019-03-09 11:38:24 +0000 UTC (2 container statuses recorded)
Mar 17 11:29:42.276: INFO: 	Container weave ready: true, restart count 0
Mar 17 11:29:42.276: INFO: 	Container weave-npc ready: true, restart count 0
Mar 17 11:29:42.276: INFO: coredns-86c58d9df4-xv8sl from kube-system started at 2019-03-09 11:38:41 +0000 UTC (1 container statuses recorded)
Mar 17 11:29:42.276: INFO: 	Container coredns ready: true, restart count 0
Mar 17 11:29:42.276: INFO: coredns-86c58d9df4-lrf5x from kube-system started at 2019-03-09 11:38:41 +0000 UTC (1 container statuses recorded)
Mar 17 11:29:42.276: INFO: 	Container coredns ready: true, restart count 0
Mar 17 11:29:42.276: INFO: kube-controller-manager-kube from kube-system started at  (0 container statuses recorded)
Mar 17 11:29:42.276: INFO: kube-apiserver-kube from kube-system started at  (0 container statuses recorded)
Mar 17 11:29:42.276: INFO: kube-proxy-6jlw8 from kube-system started at 2019-03-09 11:38:22 +0000 UTC (1 container statuses recorded)
Mar 17 11:29:42.276: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 17 11:29:42.276: INFO: etcd-kube from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f8d6a243-48a7-11e9-bf64-0242ac110009 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-f8d6a243-48a7-11e9-bf64-0242ac110009 off the node kube
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f8d6a243-48a7-11e9-bf64-0242ac110009
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:29:54.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-r57s2" for this suite.
Mar 17 11:30:14.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:30:14.618: INFO: namespace: e2e-tests-sched-pred-r57s2, resource: bindings, ignored listing per whitelist
Mar 17 11:30:14.622: INFO: namespace e2e-tests-sched-pred-r57s2 deletion completed in 20.102565998s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:47.707 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
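Note: the SchedulerPredicates case above applies a label to the node and then relaunches the pod with a matching nodeSelector so it can only schedule there. A sketch of the same two steps with client-go is below, assuming client-go v0.18+ signatures (typed calls take a context); the label key example.com/e2e-demo and the pause image are placeholders for the random key and image the test actually generates.

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    ctx := context.Background()

    // Apply an arbitrary label to the node (the e2e test uses a random key with value "42").
    node, err := cs.CoreV1().Nodes().Get(ctx, "kube", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    if node.Labels == nil {
        node.Labels = map[string]string{}
    }
    node.Labels["example.com/e2e-demo"] = "42"
    if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }

    // Relaunch the pod "now with labels": its nodeSelector must match the node label to schedule.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
        Spec: corev1.PodSpec{
            NodeSelector: map[string]string{"example.com/e2e-demo": "42"},
            Containers: []corev1.Container{
                {Name: "pause", Image: "k8s.gcr.io/pause:3.1"},
            },
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}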
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:30:14.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-089b33d8-48a8-11e9-bf64-0242ac110009
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:30:21.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pzl9q" for this suite.
Mar 17 11:30:43.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:30:43.914: INFO: namespace: e2e-tests-configmap-pzl9q, resource: bindings, ignored listing per whitelist
Mar 17 11:30:43.917: INFO: namespace e2e-tests-configmap-pzl9q deletion completed in 22.099664169s

• [SLOW TEST:29.295 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
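Note: the ConfigMap case above stores both text data and binary data and mounts the ConfigMap as a volume so the pod can read both keys back. A sketch of such a ConfigMap object follows; the key names and byte values are placeholders, not the ones the test generates.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    cm := corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
        // Plain UTF-8 values go in Data...
        Data: map[string]string{"data-1": "value-1"},
        // ...while arbitrary bytes that need not be valid UTF-8 go in BinaryData.
        BinaryData: map[string][]byte{"dump.bin": {0xff, 0xfe, 0xfd}},
    }
    out, _ := json.MarshalIndent(cm, "", "  ")
    fmt.Println(string(out)) // binaryData values appear base64-encoded in the API representation
}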
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:30:43.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Mar 17 11:30:44.026: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:30:52.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-2gzvc" for this suite.
Mar 17 11:31:14.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:31:14.292: INFO: namespace: e2e-tests-init-container-2gzvc, resource: bindings, ignored listing per whitelist
Mar 17 11:31:14.334: INFO: namespace e2e-tests-init-container-2gzvc deletion completed in 22.087018479s

• [SLOW TEST:30.417 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:31:14.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0317 11:31:45.213513       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 17 11:31:45.213: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:31:45.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-vhtk8" for this suite.
Mar 17 11:31:53.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:31:53.259: INFO: namespace: e2e-tests-gc-vhtk8, resource: bindings, ignored listing per whitelist
Mar 17 11:31:53.290: INFO: namespace e2e-tests-gc-vhtk8 deletion completed in 8.073274457s

• [SLOW TEST:38.956 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
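Note: the garbage-collector case above deletes the deployment with deleteOptions.PropagationPolicy set to Orphan and then waits 30 seconds to confirm the ReplicaSet it created is left behind. A sketch of that delete call, assuming client-go v0.18+ signatures:

package gcsketch

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// orphanDelete removes the Deployment object itself but leaves its ReplicaSets
// (and therefore their pods) in place, which is what this conformance case expects.
func orphanDelete(ctx context.Context, cs kubernetes.Interface, namespace, name string) error {
    policy := metav1.DeletePropagationOrphan
    return cs.AppsV1().Deployments(namespace).Delete(ctx, name, metav1.DeleteOptions{
        PropagationPolicy: &policy,
    })
}

The later garbage-collector cases in this run exercise non-orphaning deletion with the same call shape; metav1.DeletePropagationBackground and metav1.DeletePropagationForeground are the other two values the propagationPolicy field accepts.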
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:31:53.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0317 11:31:54.577197       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 17 11:31:54.577: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:31:54.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-qkz26" for this suite.
Mar 17 11:32:00.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:32:00.657: INFO: namespace: e2e-tests-gc-qkz26, resource: bindings, ignored listing per whitelist
Mar 17 11:32:00.662: INFO: namespace e2e-tests-gc-qkz26 deletion completed in 6.082277213s

• [SLOW TEST:7.371 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:32:00.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-vrxkr
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-vrxkr
STEP: Deleting pre-stop pod
Mar 17 11:32:16.462: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:32:16.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-vrxkr" for this suite.
Mar 17 11:32:56.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:32:56.546: INFO: namespace: e2e-tests-prestop-vrxkr, resource: bindings, ignored listing per whitelist
Mar 17 11:32:56.625: INFO: namespace e2e-tests-prestop-vrxkr deletion completed in 40.146839246s

• [SLOW TEST:55.964 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
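Note: the PreStop case above gives the tester pod a preStop lifecycle hook that reports back to the server pod before the container is killed, which is why the server's status shows "prestop": 1 after the delete. A sketch of a container with such a hook follows; the wget command and URL are placeholders, and the handler type shown (corev1.Handler) matches the k8s.io/api generation contemporary with this run, with newer releases renaming it LifecycleHandler.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "tester"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "tester",
                Image:   "busybox",
                Command: []string{"sh", "-c", "sleep 3600"},
                Lifecycle: &corev1.Lifecycle{
                    // Runs just before the container is stopped, e.g. when the pod is deleted.
                    PreStop: &corev1.Handler{
                        Exec: &corev1.ExecAction{
                            Command: []string{"wget", "-qO-", "http://server:8080/prestop"},
                        },
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}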
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:32:56.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Mar 17 11:33:03.192: INFO: 0 pods remaining
Mar 17 11:33:03.192: INFO: 0 pods has nil DeletionTimestamp
Mar 17 11:33:03.192: INFO: 
STEP: Gathering metrics
W0317 11:33:03.721488       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 17 11:33:03.721: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:33:03.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4h8qr" for this suite.
Mar 17 11:33:11.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:33:11.814: INFO: namespace: e2e-tests-gc-4h8qr, resource: bindings, ignored listing per whitelist
Mar 17 11:33:11.884: INFO: namespace e2e-tests-gc-4h8qr deletion completed in 8.160767421s

• [SLOW TEST:15.258 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:33:11.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 11:33:12.182: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Mar 17 11:33:12.236: INFO: Number of nodes with available pods: 0
Mar 17 11:33:12.236: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Mar 17 11:33:12.285: INFO: Number of nodes with available pods: 0
Mar 17 11:33:12.285: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:13.288: INFO: Number of nodes with available pods: 0
Mar 17 11:33:13.288: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:14.290: INFO: Number of nodes with available pods: 0
Mar 17 11:33:14.290: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:15.288: INFO: Number of nodes with available pods: 0
Mar 17 11:33:15.288: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:16.287: INFO: Number of nodes with available pods: 1
Mar 17 11:33:16.288: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Mar 17 11:33:16.320: INFO: Number of nodes with available pods: 1
Mar 17 11:33:16.320: INFO: Number of running nodes: 0, number of available pods: 1
Mar 17 11:33:17.323: INFO: Number of nodes with available pods: 0
Mar 17 11:33:17.323: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Mar 17 11:33:17.343: INFO: Number of nodes with available pods: 0
Mar 17 11:33:17.343: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:18.346: INFO: Number of nodes with available pods: 0
Mar 17 11:33:18.347: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:19.346: INFO: Number of nodes with available pods: 0
Mar 17 11:33:19.346: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:20.346: INFO: Number of nodes with available pods: 0
Mar 17 11:33:20.346: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:21.345: INFO: Number of nodes with available pods: 0
Mar 17 11:33:21.345: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:22.345: INFO: Number of nodes with available pods: 0
Mar 17 11:33:22.345: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:23.346: INFO: Number of nodes with available pods: 0
Mar 17 11:33:23.346: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:24.346: INFO: Number of nodes with available pods: 0
Mar 17 11:33:24.346: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:25.347: INFO: Number of nodes with available pods: 0
Mar 17 11:33:25.347: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:26.346: INFO: Number of nodes with available pods: 0
Mar 17 11:33:26.346: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:27.347: INFO: Number of nodes with available pods: 0
Mar 17 11:33:27.347: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:29.299: INFO: Number of nodes with available pods: 0
Mar 17 11:33:29.299: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:29.357: INFO: Number of nodes with available pods: 0
Mar 17 11:33:29.357: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:30.346: INFO: Number of nodes with available pods: 0
Mar 17 11:33:30.346: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:31.346: INFO: Number of nodes with available pods: 0
Mar 17 11:33:31.346: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:32.346: INFO: Number of nodes with available pods: 0
Mar 17 11:33:32.346: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:33.346: INFO: Number of nodes with available pods: 0
Mar 17 11:33:33.346: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:34.348: INFO: Number of nodes with available pods: 0
Mar 17 11:33:34.349: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:35.347: INFO: Number of nodes with available pods: 0
Mar 17 11:33:35.347: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:37.279: INFO: Number of nodes with available pods: 0
Mar 17 11:33:37.279: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:37.346: INFO: Number of nodes with available pods: 0
Mar 17 11:33:37.346: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:38.347: INFO: Number of nodes with available pods: 0
Mar 17 11:33:38.347: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:39.348: INFO: Number of nodes with available pods: 0
Mar 17 11:33:39.348: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:40.347: INFO: Number of nodes with available pods: 0
Mar 17 11:33:40.347: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:41.418: INFO: Number of nodes with available pods: 0
Mar 17 11:33:41.418: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:42.347: INFO: Number of nodes with available pods: 0
Mar 17 11:33:42.347: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:43.346: INFO: Number of nodes with available pods: 0
Mar 17 11:33:43.346: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:44.348: INFO: Number of nodes with available pods: 0
Mar 17 11:33:44.348: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:45.347: INFO: Number of nodes with available pods: 0
Mar 17 11:33:45.347: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:46.378: INFO: Number of nodes with available pods: 0
Mar 17 11:33:46.378: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:47.348: INFO: Number of nodes with available pods: 0
Mar 17 11:33:47.348: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:48.347: INFO: Number of nodes with available pods: 0
Mar 17 11:33:48.347: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:49.399: INFO: Number of nodes with available pods: 0
Mar 17 11:33:49.399: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:50.377: INFO: Number of nodes with available pods: 0
Mar 17 11:33:50.378: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:51.347: INFO: Number of nodes with available pods: 0
Mar 17 11:33:51.347: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:52.347: INFO: Number of nodes with available pods: 0
Mar 17 11:33:52.347: INFO: Node kube is running more than one daemon pod
Mar 17 11:33:53.349: INFO: Number of nodes with available pods: 1
Mar 17 11:33:53.349: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-hzqf2, will wait for the garbage collector to delete the pods
Mar 17 11:33:53.429: INFO: Deleting DaemonSet.extensions daemon-set took: 17.322094ms
Mar 17 11:33:53.530: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.194589ms
Mar 17 11:34:37.961: INFO: Number of nodes with available pods: 0
Mar 17 11:34:37.961: INFO: Number of running nodes: 0, number of available pods: 0
Mar 17 11:34:37.965: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-hzqf2/daemonsets","resourceVersion":"1290291"},"items":null}

Mar 17 11:34:37.967: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-hzqf2/pods","resourceVersion":"1290291"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:34:37.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-hzqf2" for this suite.
Mar 17 11:34:44.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:34:44.121: INFO: namespace: e2e-tests-daemonsets-hzqf2, resource: bindings, ignored listing per whitelist
Mar 17 11:34:44.152: INFO: namespace e2e-tests-daemonsets-hzqf2 deletion completed in 6.151760932s

• [SLOW TEST:92.268 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
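Note: the DaemonSet case above creates "daemon-set" with a node selector, flips the node label from blue to green to force the daemon pod off and back onto the node, and switches the update strategy to RollingUpdate along the way. A sketch of such a DaemonSet object follows; the color label and pause image are placeholders for the values the test generates.

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    ds := appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "daemon-set"}},
            // The strategy the test switches to mid-run.
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "daemon-set"}},
                Spec: corev1.PodSpec{
                    // Daemon pods land only on nodes carrying this label ("blue" vs. "green" in the run above).
                    NodeSelector: map[string]string{"color": "blue"},
                    Containers:   []corev1.Container{{Name: "app", Image: "k8s.gcr.io/pause:3.1"}},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(ds, "", "  ")
    fmt.Println(string(out))
}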
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:34:44.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 11:34:44.254: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Mar 17 11:34:44.364: INFO: Pod name sample-pod: Found 0 pods out of 1
Mar 17 11:34:49.370: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 17 11:34:49.370: INFO: Creating deployment "test-rolling-update-deployment"
Mar 17 11:34:49.378: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Mar 17 11:34:49.396: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Mar 17 11:34:51.401: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Mar 17 11:34:51.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688419289, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688419289, loc:(*time.Location)(0x7b13a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688419289, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688419289, loc:(*time.Location)(0x7b13a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-68b55d7bc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 17 11:34:53.410: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Mar 17 11:34:53.424: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-jqfgh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jqfgh/deployments/test-rolling-update-deployment,UID:ac4aef4a-48a8-11e9-a072-fa163e921bae,ResourceVersion:1290373,Generation:1,CreationTimestamp:2019-03-17 11:34:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-03-17 11:34:49 +0000 UTC 2019-03-17 11:34:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-03-17 11:34:52 +0000 UTC 2019-03-17 11:34:49 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-68b55d7bc6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Mar 17 11:34:53.428: INFO: New ReplicaSet "test-rolling-update-deployment-68b55d7bc6" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-68b55d7bc6,GenerateName:,Namespace:e2e-tests-deployment-jqfgh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jqfgh/replicasets/test-rolling-update-deployment-68b55d7bc6,UID:ac50a4ce-48a8-11e9-a072-fa163e921bae,ResourceVersion:1290364,Generation:1,CreationTimestamp:2019-03-17 11:34:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ac4aef4a-48a8-11e9-a072-fa163e921bae 0xc001c15167 0xc001c15168}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Mar 17 11:34:53.428: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Mar 17 11:34:53.428: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-jqfgh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jqfgh/replicasets/test-rolling-update-controller,UID:a93e1d43-48a8-11e9-a072-fa163e921bae,ResourceVersion:1290372,Generation:2,CreationTimestamp:2019-03-17 11:34:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ac4aef4a-48a8-11e9-a072-fa163e921bae 0xc001c14f37 0xc001c14f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Mar 17 11:34:53.434: INFO: Pod "test-rolling-update-deployment-68b55d7bc6-hl4m8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-68b55d7bc6-hl4m8,GenerateName:test-rolling-update-deployment-68b55d7bc6-,Namespace:e2e-tests-deployment-jqfgh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-jqfgh/pods/test-rolling-update-deployment-68b55d7bc6-hl4m8,UID:ac51d30d-48a8-11e9-a072-fa163e921bae,ResourceVersion:1290363,Generation:0,CreationTimestamp:2019-03-17 11:34:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-68b55d7bc6 ac50a4ce-48a8-11e9-a072-fa163e921bae 0xc00181e467 0xc00181e468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-52ps6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-52ps6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-52ps6 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00181e5a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00181e660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:34:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:34:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:34:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:34:49 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.7,PodIP:10.32.0.5,StartTime:2019-03-17 11:34:49 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-03-17 11:34:51 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://bb809ff2b505f2826bb034228ceab5f197caf79070f80dba0cfcc2ca8129609f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:34:53.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-jqfgh" for this suite.
Mar 17 11:34:59.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:34:59.499: INFO: namespace: e2e-tests-deployment-jqfgh, resource: bindings, ignored listing per whitelist
Mar 17 11:34:59.530: INFO: namespace e2e-tests-deployment-jqfgh deletion completed in 6.092175899s

• [SLOW TEST:15.378 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
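Note: the Deployment case above starts from the standalone replica set "test-rolling-update-controller", then creates "test-rolling-update-deployment" whose selector matches the same name=sample-pod pods, so the deployment adopts the old replica set and rolls it down while the new redis-based replica set comes up. A sketch of that deployment spec follows, reusing the names and images that appear in the object dump above; the exact spec the e2e framework builds may differ in detail.

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(1)
    deploy := appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: &replicas,
            // Matches the pods of the pre-existing "test-rolling-update-controller"
            // replica set, which is how the deployment adopts it.
            Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
            Strategy: appsv1.DeploymentStrategy{Type: appsv1.RollingUpdateDeploymentStrategyType},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{Name: "redis", Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0"}},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(deploy, "", "  ")
    fmt.Println(string(out))
}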
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:34:59.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Mar 17 11:34:59.965: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:35:00.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6269c" for this suite.
Mar 17 11:35:06.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:35:06.058: INFO: namespace: e2e-tests-kubectl-6269c, resource: bindings, ignored listing per whitelist
Mar 17 11:35:06.112: INFO: namespace e2e-tests-kubectl-6269c deletion completed in 6.086769167s

• [SLOW TEST:6.582 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:35:06.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Mar 17 11:35:07.083: INFO: Waiting up to 5m0s for pod "client-containers-b6d6905a-48a8-11e9-bf64-0242ac110009" in namespace "e2e-tests-containers-7dp6m" to be "success or failure"
Mar 17 11:35:07.089: INFO: Pod "client-containers-b6d6905a-48a8-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 5.737912ms
Mar 17 11:35:09.092: INFO: Pod "client-containers-b6d6905a-48a8-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008618778s
Mar 17 11:35:11.096: INFO: Pod "client-containers-b6d6905a-48a8-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012409503s
STEP: Saw pod success
Mar 17 11:35:11.096: INFO: Pod "client-containers-b6d6905a-48a8-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:35:11.100: INFO: Trying to get logs from node kube pod client-containers-b6d6905a-48a8-11e9-bf64-0242ac110009 container test-container: 
STEP: delete the pod
Mar 17 11:35:11.138: INFO: Waiting for pod client-containers-b6d6905a-48a8-11e9-bf64-0242ac110009 to disappear
Mar 17 11:35:11.141: INFO: Pod client-containers-b6d6905a-48a8-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:35:11.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-7dp6m" for this suite.
Mar 17 11:35:17.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:35:17.175: INFO: namespace: e2e-tests-containers-7dp6m, resource: bindings, ignored listing per whitelist
Mar 17 11:35:17.317: INFO: namespace e2e-tests-containers-7dp6m deletion completed in 6.173056775s

• [SLOW TEST:11.205 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
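
For reference, the spec above verifies that a container declaring neither command nor args falls back to the image's own ENTRYPOINT/CMD. A minimal pod of that shape might look like the sketch below; the name and the busybox image are illustrative, not the suite's fixture.

apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.36            # assumed image; any image with a default command works
    # no command/args here, so the image ENTRYPOINT/CMD run unchanged

With such a pod, kubectl logs shows the output of the image's default command, if any.
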
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:35:17.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Mar 17 11:35:17.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5pbpn'
Mar 17 11:35:22.696: INFO: stderr: ""
Mar 17 11:35:22.696: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 17 11:35:22.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5pbpn'
Mar 17 11:35:22.928: INFO: stderr: ""
Mar 17 11:35:22.928: INFO: stdout: "update-demo-nautilus-4pgf9 update-demo-nautilus-pn5ks "
Mar 17 11:35:22.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4pgf9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5pbpn'
Mar 17 11:35:23.148: INFO: stderr: ""
Mar 17 11:35:23.148: INFO: stdout: ""
Mar 17 11:35:23.148: INFO: update-demo-nautilus-4pgf9 is created but not running
Mar 17 11:35:28.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5pbpn'
Mar 17 11:35:28.217: INFO: stderr: ""
Mar 17 11:35:28.217: INFO: stdout: "update-demo-nautilus-4pgf9 update-demo-nautilus-pn5ks "
Mar 17 11:35:28.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4pgf9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5pbpn'
Mar 17 11:35:28.282: INFO: stderr: ""
Mar 17 11:35:28.282: INFO: stdout: "true"
Mar 17 11:35:28.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4pgf9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5pbpn'
Mar 17 11:35:28.358: INFO: stderr: ""
Mar 17 11:35:28.358: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 17 11:35:28.358: INFO: validating pod update-demo-nautilus-4pgf9
Mar 17 11:35:28.381: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 17 11:35:28.381: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 17 11:35:28.381: INFO: update-demo-nautilus-4pgf9 is verified up and running
Mar 17 11:35:28.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pn5ks -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5pbpn'
Mar 17 11:35:28.444: INFO: stderr: ""
Mar 17 11:35:28.444: INFO: stdout: "true"
Mar 17 11:35:28.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pn5ks -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5pbpn'
Mar 17 11:35:28.507: INFO: stderr: ""
Mar 17 11:35:28.507: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 17 11:35:28.507: INFO: validating pod update-demo-nautilus-pn5ks
Mar 17 11:35:28.510: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 17 11:35:28.510: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 17 11:35:28.510: INFO: update-demo-nautilus-pn5ks is verified up and running
STEP: using delete to clean up resources
Mar 17 11:35:28.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-5pbpn'
Mar 17 11:35:28.591: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 17 11:35:28.591: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Mar 17 11:35:28.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-5pbpn'
Mar 17 11:35:28.680: INFO: stderr: "No resources found.\n"
Mar 17 11:35:28.680: INFO: stdout: ""
Mar 17 11:35:28.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-5pbpn -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 17 11:35:28.754: INFO: stderr: ""
Mar 17 11:35:28.754: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:35:28.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5pbpn" for this suite.
Mar 17 11:35:50.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:35:50.855: INFO: namespace: e2e-tests-kubectl-5pbpn, resource: bindings, ignored listing per whitelist
Mar 17 11:35:50.914: INFO: namespace e2e-tests-kubectl-5pbpn deletion completed in 22.146655568s

• [SLOW TEST:33.597 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
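
The Update Demo spec above pipes a ReplicationController manifest into 'kubectl create -f -' and then polls the pods with kubectl text templates until every 'update-demo' container reports a running state. A reconstruction of an RC in the same shape, assuming only the image and label visible in the output above:

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
        version: nautilus          # assumed extra label; the selector only needs name=update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0

The actual fixture ships with the test binaries; this sketch only mirrors the fields that appear in the log.
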
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:35:50.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Mar 17 11:35:51.048: INFO: Pod name pod-release: Found 0 pods out of 1
Mar 17 11:35:56.052: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:35:57.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-5s78p" for this suite.
Mar 17 11:36:03.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:36:03.273: INFO: namespace: e2e-tests-replication-controller-5s78p, resource: bindings, ignored listing per whitelist
Mar 17 11:36:03.293: INFO: namespace e2e-tests-replication-controller-5s78p deletion completed in 6.207142278s

• [SLOW TEST:12.379 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
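
The ReplicationController spec above exercises label-based ownership: once a managed pod's labels are changed so they no longer match the controller's selector, the controller releases that pod (it keeps running, unowned) and creates a replacement to restore the replica count. By hand the same effect comes from relabelling one pod, for example 'kubectl label pod <pod-name> name=released --overwrite', where the pod name and the new label value are placeholders.
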
S
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:36:03.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 11:36:03.891: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8b35e4e-48a8-11e9-bf64-0242ac110009" in namespace "e2e-tests-downward-api-cl22p" to be "success or failure"
Mar 17 11:36:03.979: INFO: Pod "downwardapi-volume-d8b35e4e-48a8-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 88.650644ms
Mar 17 11:36:06.092: INFO: Pod "downwardapi-volume-d8b35e4e-48a8-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20190528s
Mar 17 11:36:08.096: INFO: Pod "downwardapi-volume-d8b35e4e-48a8-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.205509245s
STEP: Saw pod success
Mar 17 11:36:08.096: INFO: Pod "downwardapi-volume-d8b35e4e-48a8-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:36:08.102: INFO: Trying to get logs from node kube pod downwardapi-volume-d8b35e4e-48a8-11e9-bf64-0242ac110009 container client-container: 
STEP: delete the pod
Mar 17 11:36:08.253: INFO: Waiting for pod downwardapi-volume-d8b35e4e-48a8-11e9-bf64-0242ac110009 to disappear
Mar 17 11:36:08.273: INFO: Pod downwardapi-volume-d8b35e4e-48a8-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:36:08.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cl22p" for this suite.
Mar 17 11:36:14.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:36:14.473: INFO: namespace: e2e-tests-downward-api-cl22p, resource: bindings, ignored listing per whitelist
Mar 17 11:36:14.508: INFO: namespace e2e-tests-downward-api-cl22p deletion completed in 6.229707198s

• [SLOW TEST:11.214 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
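
The Downward API volume spec above relies on the fallback rule for resourceFieldRef: when the container declares no memory limit, limits.memory projected through a downwardAPI volume resolves to the node's allocatable memory. A sketch of such a pod, with an assumed busybox image standing in for the suite's mounttest image:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36            # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    # no resources.limits.memory, so the projected value falls back to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi             # value is reported in mebibytes
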
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:36:14.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 17 11:36:21.228: INFO: Successfully updated pod "labelsupdatedf231d9a-48a8-11e9-bf64-0242ac110009"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:36:23.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gp4nl" for this suite.
Mar 17 11:36:47.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:36:47.342: INFO: namespace: e2e-tests-downward-api-gp4nl, resource: bindings, ignored listing per whitelist
Mar 17 11:36:47.359: INFO: namespace e2e-tests-downward-api-gp4nl deletion completed in 24.084264061s

• [SLOW TEST:32.851 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
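
The spec above projects metadata.labels into a file through a downwardAPI volume and then checks that the kubelet rewrites the file after the pod's labels are updated (only volume projections are refreshed this way, environment variables are not). A sketch with an assumed busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: labels-update-demo         # illustrative name
  labels:
    stage: before
spec:
  containers:
  - name: client-container
    image: busybox:1.36            # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels

After 'kubectl label pod labels-update-demo stage=after --overwrite', the projected file picks up the new value within the kubelet sync period.
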
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:36:47.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 17 11:36:47.535: INFO: Waiting up to 5m0s for pod "pod-f2b6833d-48a8-11e9-bf64-0242ac110009" in namespace "e2e-tests-emptydir-g5zcr" to be "success or failure"
Mar 17 11:36:47.539: INFO: Pod "pod-f2b6833d-48a8-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.771101ms
Mar 17 11:36:49.587: INFO: Pod "pod-f2b6833d-48a8-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051670407s
Mar 17 11:36:51.593: INFO: Pod "pod-f2b6833d-48a8-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057590386s
STEP: Saw pod success
Mar 17 11:36:51.593: INFO: Pod "pod-f2b6833d-48a8-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:36:51.595: INFO: Trying to get logs from node kube pod pod-f2b6833d-48a8-11e9-bf64-0242ac110009 container test-container: 
STEP: delete the pod
Mar 17 11:36:51.741: INFO: Waiting for pod pod-f2b6833d-48a8-11e9-bf64-0242ac110009 to disappear
Mar 17 11:36:51.748: INFO: Pod pod-f2b6833d-48a8-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:36:51.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-g5zcr" for this suite.
Mar 17 11:36:57.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:36:57.857: INFO: namespace: e2e-tests-emptydir-g5zcr, resource: bindings, ignored listing per whitelist
Mar 17 11:36:57.891: INFO: namespace e2e-tests-emptydir-g5zcr deletion completed in 6.140877203s

• [SLOW TEST:10.532 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
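
The EmptyDir spec above mounts an emptyDir volume on the default medium (node disk) and has the suite's mounttest image create a file as root and assert mode 0666. The volume and mount shape, with an assumed busybox image standing in for mounttest:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.36            # assumed image
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # default medium; 'medium: Memory' would use tmpfs instead
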
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:36:57.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-f8ffd8b3-48a8-11e9-bf64-0242ac110009
Mar 17 11:36:58.085: INFO: Pod name my-hostname-basic-f8ffd8b3-48a8-11e9-bf64-0242ac110009: Found 0 pods out of 1
Mar 17 11:37:03.089: INFO: Pod name my-hostname-basic-f8ffd8b3-48a8-11e9-bf64-0242ac110009: Found 1 pods out of 1
Mar 17 11:37:03.089: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f8ffd8b3-48a8-11e9-bf64-0242ac110009" are running
Mar 17 11:37:03.092: INFO: Pod "my-hostname-basic-f8ffd8b3-48a8-11e9-bf64-0242ac110009-pdmmg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-03-17 11:36:58 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-03-17 11:37:00 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-03-17 11:37:00 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-03-17 11:36:58 +0000 UTC Reason: Message:}])
Mar 17 11:37:03.092: INFO: Trying to dial the pod
Mar 17 11:37:08.111: INFO: Controller my-hostname-basic-f8ffd8b3-48a8-11e9-bf64-0242ac110009: Got expected result from replica 1 [my-hostname-basic-f8ffd8b3-48a8-11e9-bf64-0242ac110009-pdmmg]: "my-hostname-basic-f8ffd8b3-48a8-11e9-bf64-0242ac110009-pdmmg", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:37:08.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-vg8nb" for this suite.
Mar 17 11:37:14.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:37:14.169: INFO: namespace: e2e-tests-replication-controller-vg8nb, resource: bindings, ignored listing per whitelist
Mar 17 11:37:14.221: INFO: namespace e2e-tests-replication-controller-vg8nb deletion completed in 6.107231359s

• [SLOW TEST:16.330 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:37:14.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:37:14.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-f5gxp" for this suite.
Mar 17 11:37:38.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:37:38.865: INFO: namespace: e2e-tests-pods-f5gxp, resource: bindings, ignored listing per whitelist
Mar 17 11:37:38.901: INFO: namespace e2e-tests-pods-f5gxp deletion completed in 24.403274325s

• [SLOW TEST:24.679 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
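
The Pods Set QOS Class spec above submits a pod and then reads status.qosClass, which Kubernetes derives from the resource stanza: requests equal to limits for every container gives Guaranteed, requests without matching limits gives Burstable, and no requests or limits gives BestEffort. A Guaranteed example with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                   # illustrative name
spec:
  containers:
  - name: app
    image: busybox:1.36            # assumed image
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi

'kubectl get pod qos-demo -o jsonpath={.status.qosClass}' then prints Guaranteed.
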
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:37:38.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Mar 17 11:37:39.020: INFO: Waiting up to 5m0s for pod "downward-api-1166c929-48a9-11e9-bf64-0242ac110009" in namespace "e2e-tests-downward-api-ltvz6" to be "success or failure"
Mar 17 11:37:39.027: INFO: Pod "downward-api-1166c929-48a9-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.920424ms
Mar 17 11:37:41.030: INFO: Pod "downward-api-1166c929-48a9-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010002349s
Mar 17 11:37:43.035: INFO: Pod "downward-api-1166c929-48a9-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014901486s
STEP: Saw pod success
Mar 17 11:37:43.035: INFO: Pod "downward-api-1166c929-48a9-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:37:43.037: INFO: Trying to get logs from node kube pod downward-api-1166c929-48a9-11e9-bf64-0242ac110009 container dapi-container: 
STEP: delete the pod
Mar 17 11:37:43.063: INFO: Waiting for pod downward-api-1166c929-48a9-11e9-bf64-0242ac110009 to disappear
Mar 17 11:37:43.070: INFO: Pod downward-api-1166c929-48a9-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:37:43.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ltvz6" for this suite.
Mar 17 11:37:51.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:37:51.140: INFO: namespace: e2e-tests-downward-api-ltvz6, resource: bindings, ignored listing per whitelist
Mar 17 11:37:51.167: INFO: namespace e2e-tests-downward-api-ltvz6 deletion completed in 8.084624765s

• [SLOW TEST:12.266 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
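
The Downward API spec above injects the pod's own UID into the container through an env var fieldRef and checks that the value matches metadata.uid. A sketch, with an assumed busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.36            # assumed image
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
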
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:37:51.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-pg4kw
Mar 17 11:37:55.301: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-pg4kw
STEP: checking the pod's current state and verifying that restartCount is present
Mar 17 11:37:55.303: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:41:55.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-pg4kw" for this suite.
Mar 17 11:42:01.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:42:01.754: INFO: namespace: e2e-tests-container-probe-pg4kw, resource: bindings, ignored listing per whitelist
Mar 17 11:42:01.756: INFO: namespace e2e-tests-container-probe-pg4kw deletion completed in 6.082715524s

• [SLOW TEST:250.589 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
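
The probe spec above starts a container that keeps /tmp/health in place, attaches an exec liveness probe running "cat /tmp/health", and then simply watches restartCount stay at 0 for several minutes, which is why this spec alone accounts for roughly 250 seconds of the run. A pod of the same shape, with an assumed busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo         # illustrative name
spec:
  containers:
  - name: liveness
    image: busybox:1.36            # assumed image
    command: ["sh", "-c", "touch /tmp/health && sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5

As long as the file exists, every probe succeeds and the kubelet never restarts the container.
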
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:42:01.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0317 11:42:42.593919       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 17 11:42:42.593: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:42:42.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-t7gqg" for this suite.
Mar 17 11:42:54.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:42:54.690: INFO: namespace: e2e-tests-gc-t7gqg, resource: bindings, ignored listing per whitelist
Mar 17 11:42:54.698: INFO: namespace e2e-tests-gc-t7gqg deletion completed in 12.102695619s

• [SLOW TEST:52.942 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
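
The garbage-collector spec above deletes the ReplicationController with delete options that request orphaning (propagationPolicy Orphan), then waits 30 seconds to confirm the collector leaves the pods alone; the "Master node is not registered" warning only means controller-manager and scheduler metrics could not be scraped in this deployment, not that the check failed. Outside the suite the same behaviour can be reproduced with 'kubectl delete rc <name> --cascade=orphan' on current clients (older releases, including the 1.13-era client used here, spell it '--cascade=false').
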
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:42:54.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9p27n
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 17 11:42:54.953: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 17 11:43:13.124: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9p27n PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 11:43:13.124: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 11:43:13.346: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:43:13.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-9p27n" for this suite.
Mar 17 11:43:35.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:43:35.420: INFO: namespace: e2e-tests-pod-network-test-9p27n, resource: bindings, ignored listing per whitelist
Mar 17 11:43:35.438: INFO: namespace e2e-tests-pod-network-test-9p27n deletion completed in 22.08651462s

• [SLOW TEST:40.739 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:43:35.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Mar 17 11:43:35.600: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Mar 17 11:43:35.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ldq24'
Mar 17 11:43:35.981: INFO: stderr: ""
Mar 17 11:43:35.981: INFO: stdout: "service/redis-slave created\n"
Mar 17 11:43:35.981: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Mar 17 11:43:35.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ldq24'
Mar 17 11:43:36.310: INFO: stderr: ""
Mar 17 11:43:36.310: INFO: stdout: "service/redis-master created\n"
Mar 17 11:43:36.310: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Mar 17 11:43:36.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ldq24'
Mar 17 11:43:36.598: INFO: stderr: ""
Mar 17 11:43:36.598: INFO: stdout: "service/frontend created\n"
Mar 17 11:43:36.598: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Mar 17 11:43:36.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ldq24'
Mar 17 11:43:36.796: INFO: stderr: ""
Mar 17 11:43:36.797: INFO: stdout: "deployment.extensions/frontend created\n"
Mar 17 11:43:36.797: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Mar 17 11:43:36.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ldq24'
Mar 17 11:43:37.019: INFO: stderr: ""
Mar 17 11:43:37.019: INFO: stdout: "deployment.extensions/redis-master created\n"
Mar 17 11:43:37.019: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Mar 17 11:43:37.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ldq24'
Mar 17 11:43:37.260: INFO: stderr: ""
Mar 17 11:43:37.260: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Mar 17 11:43:37.260: INFO: Waiting for all frontend pods to be Running.
Mar 17 11:43:47.311: INFO: Waiting for frontend to serve content.
Mar 17 11:43:48.467: INFO: Trying to add a new entry to the guestbook.
Mar 17 11:43:48.493: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Mar 17 11:43:48.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ldq24'
Mar 17 11:43:48.745: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 17 11:43:48.745: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Mar 17 11:43:48.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ldq24'
Mar 17 11:43:49.070: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 17 11:43:49.070: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Mar 17 11:43:49.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ldq24'
Mar 17 11:43:49.251: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 17 11:43:49.251: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 17 11:43:49.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ldq24'
Mar 17 11:43:49.400: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 17 11:43:49.400: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 17 11:43:49.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ldq24'
Mar 17 11:43:49.640: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 17 11:43:49.640: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Mar 17 11:43:49.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ldq24'
Mar 17 11:43:49.820: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 17 11:43:49.820: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:43:49.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ldq24" for this suite.
Mar 17 11:44:29.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:44:29.892: INFO: namespace: e2e-tests-kubectl-ldq24, resource: bindings, ignored listing per whitelist
Mar 17 11:44:29.954: INFO: namespace e2e-tests-kubectl-ldq24 deletion completed in 40.126355776s

• [SLOW TEST:54.516 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
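
One thing worth noting in the manifests echoed above: the guestbook Deployments still use apiVersion extensions/v1beta1, which this 1.13 cluster accepts but which stopped being served in Kubernetes 1.16. The apps/v1 equivalent additionally requires an explicit selector; for the frontend Deployment it would look roughly like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:                        # mandatory in apps/v1
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

The Services in the same bundle are plain v1 objects and need no change.
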
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:44:29.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-dhhb
STEP: Creating a pod to test atomic-volume-subpath
Mar 17 11:44:30.116: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dhhb" in namespace "e2e-tests-subpath-fl484" to be "success or failure"
Mar 17 11:44:30.149: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Pending", Reason="", readiness=false. Elapsed: 32.403895ms
Mar 17 11:44:32.156: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040191821s
Mar 17 11:44:34.229: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112913732s
Mar 17 11:44:36.318: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20169788s
Mar 17 11:44:38.341: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Running", Reason="", readiness=false. Elapsed: 8.224548309s
Mar 17 11:44:40.345: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Running", Reason="", readiness=false. Elapsed: 10.228378333s
Mar 17 11:44:42.444: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Running", Reason="", readiness=false. Elapsed: 12.328209788s
Mar 17 11:44:44.451: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Running", Reason="", readiness=false. Elapsed: 14.335089813s
Mar 17 11:44:46.454: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Running", Reason="", readiness=false. Elapsed: 16.338231901s
Mar 17 11:44:48.459: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Running", Reason="", readiness=false. Elapsed: 18.342363691s
Mar 17 11:44:50.463: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Running", Reason="", readiness=false. Elapsed: 20.346923266s
Mar 17 11:44:52.467: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Running", Reason="", readiness=false. Elapsed: 22.350760764s
Mar 17 11:44:54.478: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Running", Reason="", readiness=false. Elapsed: 24.361390034s
Mar 17 11:44:56.481: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Running", Reason="", readiness=false. Elapsed: 26.364528489s
Mar 17 11:44:58.577: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Running", Reason="", readiness=false. Elapsed: 28.460778588s
Mar 17 11:45:00.580: INFO: Pod "pod-subpath-test-configmap-dhhb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.46416071s
STEP: Saw pod success
Mar 17 11:45:00.580: INFO: Pod "pod-subpath-test-configmap-dhhb" satisfied condition "success or failure"
Mar 17 11:45:00.582: INFO: Trying to get logs from node kube pod pod-subpath-test-configmap-dhhb container test-container-subpath-configmap-dhhb: 
STEP: delete the pod
Mar 17 11:45:00.779: INFO: Waiting for pod pod-subpath-test-configmap-dhhb to disappear
Mar 17 11:45:00.782: INFO: Pod pod-subpath-test-configmap-dhhb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dhhb
Mar 17 11:45:00.782: INFO: Deleting pod "pod-subpath-test-configmap-dhhb" in namespace "e2e-tests-subpath-fl484"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:45:00.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-fl484" for this suite.
Mar 17 11:45:06.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:45:06.943: INFO: namespace: e2e-tests-subpath-fl484, resource: bindings, ignored listing per whitelist
Mar 17 11:45:06.982: INFO: namespace e2e-tests-subpath-fl484 deletion completed in 6.195772024s

• [SLOW TEST:37.028 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
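
The Subpath spec above mounts a single ConfigMap key with subPath onto a path where a file already exists inside the image, then reads the projected content back while the pod stays Running for about 30 seconds (hence the long chain of Running polls in the log). A sketch of that mount shape; the ConfigMap, names and the busybox image are all illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-config        # hypothetical ConfigMap for this sketch
data:
  data-1: "from-configmap"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo               # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.36            # assumed image
    command: ["sh", "-c", "cat /etc/group && sleep 30"]
    volumeMounts:
    - name: config-vol
      mountPath: /etc/group        # /etc/group stands in for "an existing file" being overlaid
      subPath: data-1
  volumes:
  - name: config-vol
    configMap:
      name: subpath-demo-config

Note that subPath mounts of a ConfigMap are not refreshed when the ConfigMap changes, unlike whole-volume mounts.
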
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:45:06.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Mar 17 11:45:07.255: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hnr9w,SelfLink:/api/v1/namespaces/e2e-tests-watch-hnr9w/configmaps/e2e-watch-test-watch-closed,UID:1c902d28-48aa-11e9-a072-fa163e921bae,ResourceVersion:1291904,Generation:0,CreationTimestamp:2019-03-17 11:45:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 17 11:45:07.255: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hnr9w,SelfLink:/api/v1/namespaces/e2e-tests-watch-hnr9w/configmaps/e2e-watch-test-watch-closed,UID:1c902d28-48aa-11e9-a072-fa163e921bae,ResourceVersion:1291905,Generation:0,CreationTimestamp:2019-03-17 11:45:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Mar 17 11:45:07.396: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hnr9w,SelfLink:/api/v1/namespaces/e2e-tests-watch-hnr9w/configmaps/e2e-watch-test-watch-closed,UID:1c902d28-48aa-11e9-a072-fa163e921bae,ResourceVersion:1291906,Generation:0,CreationTimestamp:2019-03-17 11:45:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 17 11:45:07.396: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hnr9w,SelfLink:/api/v1/namespaces/e2e-tests-watch-hnr9w/configmaps/e2e-watch-test-watch-closed,UID:1c902d28-48aa-11e9-a072-fa163e921bae,ResourceVersion:1291907,Generation:0,CreationTimestamp:2019-03-17 11:45:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:45:07.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-hnr9w" for this suite.
Mar 17 11:45:13.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:45:13.551: INFO: namespace: e2e-tests-watch-hnr9w, resource: bindings, ignored listing per whitelist
Mar 17 11:45:13.711: INFO: namespace e2e-tests-watch-hnr9w deletion completed in 6.308573932s

• [SLOW TEST:6.728 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:45:13.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Mar 17 11:45:13.957: INFO: Waiting up to 5m0s for pod "downward-api-208ffaf8-48aa-11e9-bf64-0242ac110009" in namespace "e2e-tests-downward-api-h2lb9" to be "success or failure"
Mar 17 11:45:13.983: INFO: Pod "downward-api-208ffaf8-48aa-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 25.901558ms
Mar 17 11:45:16.001: INFO: Pod "downward-api-208ffaf8-48aa-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043337994s
Mar 17 11:45:18.008: INFO: Pod "downward-api-208ffaf8-48aa-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050834026s
Mar 17 11:45:20.058: INFO: Pod "downward-api-208ffaf8-48aa-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100542997s
STEP: Saw pod success
Mar 17 11:45:20.058: INFO: Pod "downward-api-208ffaf8-48aa-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:45:20.065: INFO: Trying to get logs from node kube pod downward-api-208ffaf8-48aa-11e9-bf64-0242ac110009 container dapi-container: 
STEP: delete the pod
Mar 17 11:45:20.103: INFO: Waiting for pod downward-api-208ffaf8-48aa-11e9-bf64-0242ac110009 to disappear
Mar 17 11:45:20.138: INFO: Pod downward-api-208ffaf8-48aa-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:45:20.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-h2lb9" for this suite.
Mar 17 11:45:26.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:45:26.396: INFO: namespace: e2e-tests-downward-api-h2lb9, resource: bindings, ignored listing per whitelist
Mar 17 11:45:26.440: INFO: namespace e2e-tests-downward-api-h2lb9 deletion completed in 6.286483391s

• [SLOW TEST:12.729 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
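The pod this spec builds can be approximated by hand with a manifest along the following lines (names are illustrative; the fieldRef paths are the standard downward API fields the test checks):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "env | grep -E '^POD_(NAME|NAMESPACE|IP)='"]
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP

Once it has completed, 'kubectl logs downward-api-demo' should print the three variables, which is essentially what the "Saw pod success" assertion above boils down to.
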
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:45:26.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0317 11:45:36.660466       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 17 11:45:36.660: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:45:36.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4bxcf" for this suite.
Mar 17 11:45:42.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:45:42.845: INFO: namespace: e2e-tests-gc-4bxcf, resource: bindings, ignored listing per whitelist
Mar 17 11:45:42.857: INFO: namespace e2e-tests-gc-4bxcf deletion completed in 6.186786662s

• [SLOW TEST:16.417 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
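The create/delete/wait sequence above is ordinary cascading deletion: removing an owner without asking for orphaning lets the garbage collector delete its dependents. A rough manual equivalent, with an illustrative ReplicationController:

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: demo-rc
  spec:
    replicas: 2
    selector:
      app: gc-demo
    template:
      metadata:
        labels:
          app: gc-demo
      spec:
        containers:
        - name: pause
          image: k8s.gcr.io/pause:3.1

and then:

  kubectl delete rc demo-rc        # default: dependent pods are garbage collected as well
  kubectl get pods -l app=gc-demo  # empties once the collector has caught up

Passing --cascade=orphan (spelled --cascade=false on the 1.13-era client used in this run) would leave the pods behind instead, which is what the orphaning variants of this test exercise.
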
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:45:42.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-31d98b8e-48aa-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume configMaps
Mar 17 11:45:43.007: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-31da6af0-48aa-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-p2j8x" to be "success or failure"
Mar 17 11:45:43.021: INFO: Pod "pod-projected-configmaps-31da6af0-48aa-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 13.021635ms
Mar 17 11:45:45.024: INFO: Pod "pod-projected-configmaps-31da6af0-48aa-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016357619s
Mar 17 11:45:47.030: INFO: Pod "pod-projected-configmaps-31da6af0-48aa-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022726153s
STEP: Saw pod success
Mar 17 11:45:47.030: INFO: Pod "pod-projected-configmaps-31da6af0-48aa-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:45:47.035: INFO: Trying to get logs from node kube pod pod-projected-configmaps-31da6af0-48aa-11e9-bf64-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 17 11:45:47.269: INFO: Waiting for pod pod-projected-configmaps-31da6af0-48aa-11e9-bf64-0242ac110009 to disappear
Mar 17 11:45:47.285: INFO: Pod pod-projected-configmaps-31da6af0-48aa-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:45:47.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p2j8x" for this suite.
Mar 17 11:45:53.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:45:53.400: INFO: namespace: e2e-tests-projected-p2j8x, resource: bindings, ignored listing per whitelist
Mar 17 11:45:53.419: INFO: namespace e2e-tests-projected-p2j8x deletion completed in 6.129113766s

• [SLOW TEST:10.562 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
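A projected volume with a single configMap source behaves like a plain configMap volume, which is all this spec consumes. By hand (illustrative names), create the ConfigMap with 'kubectl create configmap demo-config --from-literal=data-1=value-1' and run a pod such as:

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: demo-config
    containers:
    - name: projected-configmap-volume-test
      image: docker.io/library/busybox:1.29
      command: ["cat", "/etc/projected/data-1"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected

'kubectl logs projected-cm-demo' should then print value-1, mirroring the "consume configMaps" check above.
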
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:45:53.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Mar 17 11:45:53.571: INFO: PodSpec: initContainers in spec.initContainers
Mar 17 11:46:37.423: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-382e54b7-48aa-11e9-bf64-0242ac110009", GenerateName:"", Namespace:"e2e-tests-init-container-lxwwp", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-lxwwp/pods/pod-init-382e54b7-48aa-11e9-bf64-0242ac110009", UID:"3830417d-48aa-11e9-a072-fa163e921bae", ResourceVersion:"1292153", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688419953, loc:(*time.Location)(0x7b13a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"571145590"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lw4c5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001dd2c40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lw4c5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lw4c5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lw4c5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f32448), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kube", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001f4ca20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f324d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f324f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001f324f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001f324fc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688419953, loc:(*time.Location)(0x7b13a80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688419953, loc:(*time.Location)(0x7b13a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688419953, loc:(*time.Location)(0x7b13a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688419953, loc:(*time.Location)(0x7b13a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"192.168.100.7", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001da24a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0026349a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002634a10)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://7c65215da9ef8445d242e1e81d070f4ac6dd91c30eff9e779ac09185ee551aa2"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001da24e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001da24c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:46:37.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-lxwwp" for this suite.
Mar 17 11:46:59.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:46:59.671: INFO: namespace: e2e-tests-init-container-lxwwp, resource: bindings, ignored listing per whitelist
Mar 17 11:46:59.686: INFO: namespace e2e-tests-init-container-lxwwp deletion completed in 22.249812012s

• [SLOW TEST:66.266 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
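The PodSpec dumped above reduces to the shape below; recreating it by hand (illustrative name, same images and commands) shows the same behaviour, with init1 stuck in a restart loop and run1 never leaving the Waiting state:

  apiVersion: v1
  kind: Pod
  metadata:
    name: init-fail-demo
  spec:
    restartPolicy: Always
    initContainers:
    - name: init1
      image: docker.io/library/busybox:1.29
      command: ["/bin/false"]   # always fails, so init2 and run1 never start
    - name: init2
      image: docker.io/library/busybox:1.29
      command: ["/bin/true"]
    containers:
    - name: run1
      image: k8s.gcr.io/pause:3.1

Watching it with 'kubectl get pod init-fail-demo -w' shows the status cycling through Init:Error and Init:CrashLoopBackOff while READY stays 0/1.
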
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:46:59.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-8mxgr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-8mxgr to expose endpoints map[]
Mar 17 11:46:59.844: INFO: Get endpoints failed (2.230243ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Mar 17 11:47:00.849: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-8mxgr exposes endpoints map[] (1.007386292s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-8mxgr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-8mxgr to expose endpoints map[pod1:[80]]
Mar 17 11:47:04.050: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-8mxgr exposes endpoints map[pod1:[80]] (3.1819957s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-8mxgr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-8mxgr to expose endpoints map[pod1:[80] pod2:[80]]
Mar 17 11:47:07.184: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-8mxgr exposes endpoints map[pod1:[80] pod2:[80]] (3.121665746s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-8mxgr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-8mxgr to expose endpoints map[pod2:[80]]
Mar 17 11:47:07.256: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-8mxgr exposes endpoints map[pod2:[80]] (64.792879ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-8mxgr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-8mxgr to expose endpoints map[]
Mar 17 11:47:07.273: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-8mxgr exposes endpoints map[] (6.781945ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:47:07.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-8mxgr" for this suite.
Mar 17 11:47:29.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:47:29.348: INFO: namespace: e2e-tests-services-8mxgr, resource: bindings, ignored listing per whitelist
Mar 17 11:47:29.455: INFO: namespace e2e-tests-services-8mxgr deletion completed in 22.139212587s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:29.769 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
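The endpoint bookkeeping validated here can be reproduced with a plain Service plus label-matched pods (illustrative names):

  apiVersion: v1
  kind: Service
  metadata:
    name: endpoint-test2
  spec:
    selector:
      app: endpoint-demo
    ports:
    - port: 80
      targetPort: 80

and then:

  kubectl get endpoints endpoint-test2    # empty until a matching pod is Ready
  kubectl run pod1 --image=docker.io/library/nginx:1.14-alpine --restart=Never --labels=app=endpoint-demo
  kubectl get endpoints endpoint-test2    # now lists <pod1 IP>:80
  kubectl delete pod pod1
  kubectl get endpoints endpoint-test2    # empty again
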
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:47:29.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Mar 17 11:47:29.599: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:47:36.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-dczz5" for this suite.
Mar 17 11:47:42.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:47:42.187: INFO: namespace: e2e-tests-init-container-dczz5, resource: bindings, ignored listing per whitelist
Mar 17 11:47:42.250: INFO: namespace e2e-tests-init-container-dczz5 deletion completed in 6.186457388s

• [SLOW TEST:12.795 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
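For the RestartNever case the init containers just need to exit successfully before the app container runs; a minimal sketch (illustrative name):

  apiVersion: v1
  kind: Pod
  metadata:
    name: init-ok-demo
  spec:
    restartPolicy: Never
    initContainers:
    - name: init1
      image: docker.io/library/busybox:1.29
      command: ["/bin/true"]
    - name: init2
      image: docker.io/library/busybox:1.29
      command: ["/bin/true"]
    containers:
    - name: run1
      image: docker.io/library/busybox:1.29
      command: ["/bin/true"]

and, once the pod has run:

  kubectl get pod init-ok-demo -o jsonpath='{range .status.initContainerStatuses[*]}{.name}={.state.terminated.reason}{"\n"}{end}'
  # expected: init1=Completed, init2=Completed
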
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:47:42.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 17 11:47:42.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-29sg6'
Mar 17 11:47:44.811: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 17 11:47:44.811: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Mar 17 11:47:44.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-29sg6'
Mar 17 11:47:45.001: INFO: stderr: ""
Mar 17 11:47:45.001: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:47:45.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-29sg6" for this suite.
Mar 17 11:47:51.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:47:51.176: INFO: namespace: e2e-tests-kubectl-29sg6, resource: bindings, ignored listing per whitelist
Mar 17 11:47:51.184: INFO: namespace e2e-tests-kubectl-29sg6 deletion completed in 6.133083381s

• [SLOW TEST:8.934 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
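The deprecation warning above concerns only the generator flag; the Job itself is unremarkable. As a manifest, the restartPolicy this spec asks for looks like the following ('kubectl create job' on newer clients defaults to restartPolicy Never, so a manifest is the simplest way to get OnFailure):

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: e2e-test-nginx-job
  spec:
    template:
      spec:
        restartPolicy: OnFailure          # the property this spec exercises
        containers:
        - name: e2e-test-nginx-job
          image: docker.io/library/nginx:1.14-alpine

Cleanup is the same 'kubectl delete jobs e2e-test-nginx-job' that the AfterEach step runs above.
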
S
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:47:51.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 11:47:55.556: INFO: Waiting up to 5m0s for pod "client-envvars-80dd95c3-48aa-11e9-bf64-0242ac110009" in namespace "e2e-tests-pods-n5lx4" to be "success or failure"
Mar 17 11:47:55.565: INFO: Pod "client-envvars-80dd95c3-48aa-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 9.688155ms
Mar 17 11:47:57.626: INFO: Pod "client-envvars-80dd95c3-48aa-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070830017s
Mar 17 11:47:59.629: INFO: Pod "client-envvars-80dd95c3-48aa-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073644756s
Mar 17 11:48:01.632: INFO: Pod "client-envvars-80dd95c3-48aa-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.07647078s
STEP: Saw pod success
Mar 17 11:48:01.632: INFO: Pod "client-envvars-80dd95c3-48aa-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:48:01.634: INFO: Trying to get logs from node kube pod client-envvars-80dd95c3-48aa-11e9-bf64-0242ac110009 container env3cont: 
STEP: delete the pod
Mar 17 11:48:01.948: INFO: Waiting for pod client-envvars-80dd95c3-48aa-11e9-bf64-0242ac110009 to disappear
Mar 17 11:48:01.970: INFO: Pod client-envvars-80dd95c3-48aa-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:48:01.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-n5lx4" for this suite.
Mar 17 11:48:53.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:48:54.067: INFO: namespace: e2e-tests-pods-n5lx4, resource: bindings, ignored listing per whitelist
Mar 17 11:48:54.120: INFO: namespace e2e-tests-pods-n5lx4 deletion completed in 52.147809536s

• [SLOW TEST:62.937 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
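What is asserted here is kubelet's service environment injection: every Service that already exists in the namespace when a pod starts is exposed to it as <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT (plus Docker-link style <NAME>_PORT_* variables), with the name upper-cased and dashes mapped to underscores. A quick manual check (illustrative names; the service must exist before the pod starts, which is why the spec above creates its server pod and service first):

  kubectl create service clusterip demo-svc --tcp=8080:8080
  kubectl run envdump --image=docker.io/library/busybox:1.29 --restart=Never --command -- sh -c 'env | grep ^DEMO_SVC_'
  kubectl logs envdump    # DEMO_SVC_SERVICE_HOST=<cluster IP>, DEMO_SVC_SERVICE_PORT=8080, ...
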
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:48:54.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-a3e8391d-48aa-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume configMaps
Mar 17 11:48:54.323: INFO: Waiting up to 5m0s for pod "pod-configmaps-a3e8c5b8-48aa-11e9-bf64-0242ac110009" in namespace "e2e-tests-configmap-q6h59" to be "success or failure"
Mar 17 11:48:54.328: INFO: Pod "pod-configmaps-a3e8c5b8-48aa-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367262ms
Mar 17 11:48:56.339: INFO: Pod "pod-configmaps-a3e8c5b8-48aa-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015395767s
Mar 17 11:48:58.363: INFO: Pod "pod-configmaps-a3e8c5b8-48aa-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039943434s
STEP: Saw pod success
Mar 17 11:48:58.363: INFO: Pod "pod-configmaps-a3e8c5b8-48aa-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:48:58.366: INFO: Trying to get logs from node kube pod pod-configmaps-a3e8c5b8-48aa-11e9-bf64-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Mar 17 11:48:58.435: INFO: Waiting for pod pod-configmaps-a3e8c5b8-48aa-11e9-bf64-0242ac110009 to disappear
Mar 17 11:48:58.441: INFO: Pod pod-configmaps-a3e8c5b8-48aa-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:48:58.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-q6h59" for this suite.
Mar 17 11:49:04.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:49:04.492: INFO: namespace: e2e-tests-configmap-q6h59, resource: bindings, ignored listing per whitelist
Mar 17 11:49:04.568: INFO: namespace e2e-tests-configmap-q6h59 deletion completed in 6.122539327s

• [SLOW TEST:10.447 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
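"Mappings and Item mode" refers to the items list of a configMap volume source, which both renames a key on disk and sets its file mode. A minimal sketch (illustrative names and key; create the ConfigMap first with 'kubectl create configmap demo-config --from-literal=data-1=value-1'):

  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-mode-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: cfg
      configMap:
        name: demo-config
        items:
        - key: data-1
          path: path/to/data-1   # remapped file name
          mode: 0400             # per-item octal file mode
    containers:
    - name: configmap-volume-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "stat -c '%a %n' /etc/cfg/path/to/data-1; cat /etc/cfg/path/to/data-1"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg

The container log should show 400 for the backing file, followed by value-1.
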
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:49:04.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 17 11:49:12.704: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 17 11:49:12.713: INFO: Pod pod-with-poststart-http-hook still exists
Mar 17 11:49:14.713: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 17 11:49:14.717: INFO: Pod pod-with-poststart-http-hook still exists
Mar 17 11:49:16.713: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 17 11:49:16.716: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:49:16.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-p59dz" for this suite.
Mar 17 11:49:38.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:49:38.754: INFO: namespace: e2e-tests-container-lifecycle-hook-p59dz, resource: bindings, ignored listing per whitelist
Mar 17 11:49:38.783: INFO: namespace e2e-tests-container-lifecycle-hook-p59dz deletion completed in 22.064133788s

• [SLOW TEST:34.216 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
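The hook itself is a single field on the container; the handler it calls is the separate pod started in the "create the container to handle the HTTPGet hook request" step. Schematically (illustrative values):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook
  spec:
    containers:
    - name: pod-with-poststart-http-hook
      image: k8s.gcr.io/pause:3.1
      lifecycle:
        postStart:
          httpGet:
            path: /echo?msg=poststart
            port: 8080
            host: 10.32.0.5   # illustrative handler-pod IP; omitting host targets this pod's own IP

The container is not reported Running until the postStart call has completed, and a failing hook gets the container killed and restarted according to its restart policy, which is what makes the "check poststart hook" step observable from the handler side.
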
SSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:49:38.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 17 11:49:43.490: INFO: Successfully updated pod "pod-update-be82313f-48aa-11e9-bf64-0242ac110009"
STEP: verifying the updated pod is in kubernetes
Mar 17 11:49:43.511: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:49:43.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-f7c2j" for this suite.
Mar 17 11:50:05.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:50:05.591: INFO: namespace: e2e-tests-pods-f7c2j, resource: bindings, ignored listing per whitelist
Mar 17 11:50:05.611: INFO: namespace e2e-tests-pods-f7c2j deletion completed in 22.096715717s

• [SLOW TEST:26.828 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
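Pods accept only a narrow set of in-place updates (metadata such as labels and annotations, container images, activeDeadlineSeconds), and the "updating the pod" step above amounts to changing a label and submitting the object back. By hand that is simply (illustrative name and label):

  kubectl label pod pod-update-demo time=updated --overwrite
  # or, equivalently, a strategic-merge patch:
  kubectl patch pod pod-update-demo -p '{"metadata":{"labels":{"time":"updated"}}}'
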
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:50:05.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 17 11:50:05.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-pt9dr'
Mar 17 11:50:05.821: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 17 11:50:05.821: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Mar 17 11:50:05.835: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Mar 17 11:50:05.871: INFO: scanned /root for discovery docs: 
Mar 17 11:50:05.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-pt9dr'
Mar 17 11:50:21.670: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Mar 17 11:50:21.670: INFO: stdout: "Created e2e-test-nginx-rc-bb6299311474aac145f791482fa8ef19\nScaling up e2e-test-nginx-rc-bb6299311474aac145f791482fa8ef19 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-bb6299311474aac145f791482fa8ef19 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-bb6299311474aac145f791482fa8ef19 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Mar 17 11:50:21.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pt9dr'
Mar 17 11:50:21.787: INFO: stderr: ""
Mar 17 11:50:21.787: INFO: stdout: "e2e-test-nginx-rc-bb6299311474aac145f791482fa8ef19-9ng4t "
Mar 17 11:50:21.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-bb6299311474aac145f791482fa8ef19-9ng4t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pt9dr'
Mar 17 11:50:21.868: INFO: stderr: ""
Mar 17 11:50:21.868: INFO: stdout: "true"
Mar 17 11:50:21.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-bb6299311474aac145f791482fa8ef19-9ng4t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pt9dr'
Mar 17 11:50:21.967: INFO: stderr: ""
Mar 17 11:50:21.967: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Mar 17 11:50:21.967: INFO: e2e-test-nginx-rc-bb6299311474aac145f791482fa8ef19-9ng4t is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Mar 17 11:50:21.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pt9dr'
Mar 17 11:50:22.076: INFO: stderr: ""
Mar 17 11:50:22.076: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:50:22.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pt9dr" for this suite.
Mar 17 11:50:44.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:50:44.176: INFO: namespace: e2e-tests-kubectl-pt9dr, resource: bindings, ignored listing per whitelist
Mar 17 11:50:44.176: INFO: namespace e2e-tests-kubectl-pt9dr deletion completed in 22.092749092s

• [SLOW TEST:38.565 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
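Both warnings in the output are expected; the run/v1 generator and the rolling-update command were already deprecated in this release in favour of Deployments. A rough modern equivalent of "rolling-update to same image" (illustrative names; kubectl rollout restart needs a 1.15+ client):

  kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
  kubectl rollout restart deployment/e2e-test-nginx   # re-rolls the pods without changing the image
  kubectl rollout status deployment/e2e-test-nginx

Unlike rolling-update, 'kubectl set image' with an unchanged image does not modify the pod template and therefore triggers no rollout, which is why the restart subcommand exists.
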
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:50:44.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 11:50:44.337: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e57b0454-48aa-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-7wjjm" to be "success or failure"
Mar 17 11:50:44.343: INFO: Pod "downwardapi-volume-e57b0454-48aa-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 5.410706ms
Mar 17 11:50:46.349: INFO: Pod "downwardapi-volume-e57b0454-48aa-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011451163s
Mar 17 11:50:48.352: INFO: Pod "downwardapi-volume-e57b0454-48aa-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014664937s
STEP: Saw pod success
Mar 17 11:50:48.352: INFO: Pod "downwardapi-volume-e57b0454-48aa-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:50:48.355: INFO: Trying to get logs from node kube pod downwardapi-volume-e57b0454-48aa-11e9-bf64-0242ac110009 container client-container: 
STEP: delete the pod
Mar 17 11:50:48.387: INFO: Waiting for pod downwardapi-volume-e57b0454-48aa-11e9-bf64-0242ac110009 to disappear
Mar 17 11:50:48.396: INFO: Pod downwardapi-volume-e57b0454-48aa-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:50:48.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7wjjm" for this suite.
Mar 17 11:50:54.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:50:54.460: INFO: namespace: e2e-tests-projected-7wjjm, resource: bindings, ignored listing per whitelist
Mar 17 11:50:54.609: INFO: namespace e2e-tests-projected-7wjjm deletion completed in 6.210043156s

• [SLOW TEST:10.433 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
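Per-item modes on a projected downwardAPI source look like the sketch below (illustrative names); the "should provide podname only" variant further down uses the same item without the mode field:

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-dapi-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
              mode: 0400           # the per-item mode this spec checks
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname; cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
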
SSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:50:54.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:51:54.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ztkjk" for this suite.
Mar 17 11:52:17.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:52:17.094: INFO: namespace: e2e-tests-container-probe-ztkjk, resource: bindings, ignored listing per whitelist
Mar 17 11:52:17.104: INFO: namespace e2e-tests-container-probe-ztkjk deletion completed in 22.303754253s

• [SLOW TEST:82.494 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
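The point of this spec is that a failing readiness probe only keeps the pod out of Ready; unlike a liveness probe it never causes a restart, so the test simply watches the pod for a while before passing. A minimal reproduction (illustrative name):

  apiVersion: v1
  kind: Pod
  metadata:
    name: never-ready-demo
  spec:
    containers:
    - name: probe-demo
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "sleep 600"]
      readinessProbe:
        exec:
          command: ["/bin/false"]
        initialDelaySeconds: 5
        periodSeconds: 5

and, after a minute or so:

  kubectl get pod never-ready-demo   # READY 0/1, RESTARTS 0, STATUS Running
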
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:52:17.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 11:52:17.364: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1cdaa8e4-48ab-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-nrtrx" to be "success or failure"
Mar 17 11:52:17.368: INFO: Pod "downwardapi-volume-1cdaa8e4-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07528ms
Mar 17 11:52:19.376: INFO: Pod "downwardapi-volume-1cdaa8e4-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012416817s
Mar 17 11:52:21.380: INFO: Pod "downwardapi-volume-1cdaa8e4-48ab-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015485717s
STEP: Saw pod success
Mar 17 11:52:21.380: INFO: Pod "downwardapi-volume-1cdaa8e4-48ab-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:52:21.433: INFO: Trying to get logs from node kube pod downwardapi-volume-1cdaa8e4-48ab-11e9-bf64-0242ac110009 container client-container: 
STEP: delete the pod
Mar 17 11:52:21.585: INFO: Waiting for pod downwardapi-volume-1cdaa8e4-48ab-11e9-bf64-0242ac110009 to disappear
Mar 17 11:52:21.594: INFO: Pod downwardapi-volume-1cdaa8e4-48ab-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:52:21.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nrtrx" for this suite.
Mar 17 11:52:27.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:52:27.686: INFO: namespace: e2e-tests-projected-nrtrx, resource: bindings, ignored listing per whitelist
Mar 17 11:52:27.690: INFO: namespace e2e-tests-projected-nrtrx deletion completed in 6.094432417s

• [SLOW TEST:10.587 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:52:27.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-23264163-48ab-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume secrets
Mar 17 11:52:27.800: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-23276eb3-48ab-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-b62hp" to be "success or failure"
Mar 17 11:52:27.857: INFO: Pod "pod-projected-secrets-23276eb3-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 56.908871ms
Mar 17 11:52:29.861: INFO: Pod "pod-projected-secrets-23276eb3-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060682592s
Mar 17 11:52:31.865: INFO: Pod "pod-projected-secrets-23276eb3-48ab-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064270032s
STEP: Saw pod success
Mar 17 11:52:31.865: INFO: Pod "pod-projected-secrets-23276eb3-48ab-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:52:31.867: INFO: Trying to get logs from node kube pod pod-projected-secrets-23276eb3-48ab-11e9-bf64-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Mar 17 11:52:31.966: INFO: Waiting for pod pod-projected-secrets-23276eb3-48ab-11e9-bf64-0242ac110009 to disappear
Mar 17 11:52:31.975: INFO: Pod pod-projected-secrets-23276eb3-48ab-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:52:31.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b62hp" for this suite.
Mar 17 11:52:38.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:52:38.029: INFO: namespace: e2e-tests-projected-b62hp, resource: bindings, ignored listing per whitelist
Mar 17 11:52:38.088: INFO: namespace e2e-tests-projected-b62hp deletion completed in 6.110837639s

• [SLOW TEST:10.397 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
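
This secret-projection spec runs the consuming container as non-root and verifies the file ownership/mode produced by defaultMode together with fsGroup. A hedged sketch of an equivalent pod follows; the secret name, UID/GID, mode and mount path are illustrative.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-nonroot-demo   # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root user
    fsGroup: 1001                    # group ownership applied to volume files
  containers:
  - name: projected-secret-volume-test   # container name matches the log above
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/projected-secret"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440              # file mode applied to projected entries
      sources:
      - secret:
          name: projected-secret-demo-data   # must already exist in the namespace
EOF
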
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:52:38.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 11:52:38.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Mar 17 11:52:38.419: INFO: stderr: ""
Mar 17 11:52:38.419: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.4\", GitCommit:\"c27b913fddd1a6c480c229191a087698aa92f0b1\", GitTreeState:\"clean\", BuildDate:\"2019-03-10T12:38:54Z\", GoVersion:\"go1.11.1\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.4\", GitCommit:\"c27b913fddd1a6c480c229191a087698aa92f0b1\", GitTreeState:\"clean\", BuildDate:\"2019-02-28T13:30:26Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:52:38.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5z2cd" for this suite.
Mar 17 11:52:44.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:52:44.472: INFO: namespace: e2e-tests-kubectl-5z2cd, resource: bindings, ignored listing per whitelist
Mar 17 11:52:44.512: INFO: namespace e2e-tests-kubectl-5z2cd deletion completed in 6.089375156s

• [SLOW TEST:6.424 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
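
The Kubectl version spec only asserts that both the client and the server version.Info blocks are printed, as seen in the stdout captured above. Reproducing the check by hand is a one-liner; the kubeconfig path is the one used throughout this run.

kubectl --kubeconfig=/root/.kube/config version
# expected: one "Client Version: version.Info{...}" line and one "Server Version: version.Info{...}" line
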
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:52:44.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 11:52:44.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d2f2db1-48ab-11e9-bf64-0242ac110009" in namespace "e2e-tests-downward-api-tl7s6" to be "success or failure"
Mar 17 11:52:44.690: INFO: Pod "downwardapi-volume-2d2f2db1-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 22.435085ms
Mar 17 11:52:46.906: INFO: Pod "downwardapi-volume-2d2f2db1-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238434634s
Mar 17 11:52:48.910: INFO: Pod "downwardapi-volume-2d2f2db1-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.241938805s
Mar 17 11:52:50.913: INFO: Pod "downwardapi-volume-2d2f2db1-48ab-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.244976695s
STEP: Saw pod success
Mar 17 11:52:50.913: INFO: Pod "downwardapi-volume-2d2f2db1-48ab-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:52:50.914: INFO: Trying to get logs from node kube pod downwardapi-volume-2d2f2db1-48ab-11e9-bf64-0242ac110009 container client-container: 
STEP: delete the pod
Mar 17 11:52:51.056: INFO: Waiting for pod downwardapi-volume-2d2f2db1-48ab-11e9-bf64-0242ac110009 to disappear
Mar 17 11:52:51.063: INFO: Pod downwardapi-volume-2d2f2db1-48ab-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:52:51.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tl7s6" for this suite.
Mar 17 11:52:57.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:52:57.201: INFO: namespace: e2e-tests-downward-api-tl7s6, resource: bindings, ignored listing per whitelist
Mar 17 11:52:57.217: INFO: namespace e2e-tests-downward-api-tl7s6 deletion completed in 6.151466041s

• [SLOW TEST:12.705 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
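
Here a plain (non-projected) downward API volume surfaces the container's own memory request through a resourceFieldRef. A minimal sketch, assuming a busybox image and a 64Mi request (both illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-resources-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name matches the log above
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi               # report the value in MiB, i.e. "64"
EOF
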
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:52:57.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Mar 17 11:52:57.860: INFO: Waiting up to 5m0s for pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-wnrwr" in namespace "e2e-tests-svcaccounts-d7zbd" to be "success or failure"
Mar 17 11:52:57.870: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-wnrwr": Phase="Pending", Reason="", readiness=false. Elapsed: 9.876846ms
Mar 17 11:52:59.896: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-wnrwr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036311635s
Mar 17 11:53:01.906: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-wnrwr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04583623s
Mar 17 11:53:03.911: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-wnrwr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050563633s
STEP: Saw pod success
Mar 17 11:53:03.911: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-wnrwr" satisfied condition "success or failure"
Mar 17 11:53:03.912: INFO: Trying to get logs from node kube pod pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-wnrwr container token-test: 
STEP: delete the pod
Mar 17 11:53:04.337: INFO: Waiting for pod pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-wnrwr to disappear
Mar 17 11:53:04.349: INFO: Pod pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-wnrwr no longer exists
STEP: Creating a pod to test consume service account root CA
Mar 17 11:53:04.355: INFO: Waiting up to 5m0s for pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-gd5t5" in namespace "e2e-tests-svcaccounts-d7zbd" to be "success or failure"
Mar 17 11:53:04.418: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-gd5t5": Phase="Pending", Reason="", readiness=false. Elapsed: 63.008611ms
Mar 17 11:53:06.423: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-gd5t5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067794894s
Mar 17 11:53:08.426: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-gd5t5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071130564s
Mar 17 11:53:10.429: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-gd5t5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.073934854s
STEP: Saw pod success
Mar 17 11:53:10.429: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-gd5t5" satisfied condition "success or failure"
Mar 17 11:53:10.432: INFO: Trying to get logs from node kube pod pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-gd5t5 container root-ca-test: 
STEP: delete the pod
Mar 17 11:53:10.489: INFO: Waiting for pod pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-gd5t5 to disappear
Mar 17 11:53:10.499: INFO: Pod pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-gd5t5 no longer exists
STEP: Creating a pod to test consume service account namespace
Mar 17 11:53:10.505: INFO: Waiting up to 5m0s for pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-hsn6j" in namespace "e2e-tests-svcaccounts-d7zbd" to be "success or failure"
Mar 17 11:53:10.589: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-hsn6j": Phase="Pending", Reason="", readiness=false. Elapsed: 83.691086ms
Mar 17 11:53:12.593: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-hsn6j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08761323s
Mar 17 11:53:14.595: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-hsn6j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090247652s
Mar 17 11:53:16.598: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-hsn6j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.092901437s
STEP: Saw pod success
Mar 17 11:53:16.598: INFO: Pod "pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-hsn6j" satisfied condition "success or failure"
Mar 17 11:53:16.600: INFO: Trying to get logs from node kube pod pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-hsn6j container namespace-test: 
STEP: delete the pod
Mar 17 11:53:16.628: INFO: Waiting for pod pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-hsn6j to disappear
Mar 17 11:53:16.781: INFO: Pod pod-service-account-3513103e-48ab-11e9-bf64-0242ac110009-hsn6j no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:53:16.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-d7zbd" for this suite.
Mar 17 11:53:24.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:53:24.928: INFO: namespace: e2e-tests-svcaccounts-d7zbd, resource: bindings, ignored listing per whitelist
Mar 17 11:53:24.932: INFO: namespace e2e-tests-svcaccounts-d7zbd deletion completed in 8.148086085s

• [SLOW TEST:27.715 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
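
The three pods above read the token, root CA and namespace files that Kubernetes mounts from the namespace's default service account. The same files can be inspected interactively; the paths below are the standard mount locations, while the pod name and image are illustrative.

kubectl run sa-inspect --image=busybox --restart=Never --rm -it -- \
  sh -c 'ls /var/run/secrets/kubernetes.io/serviceaccount && \
         cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
# expected files: ca.crt  namespace  token
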
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:53:24.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Mar 17 11:53:29.126: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-45460a94-48ab-11e9-bf64-0242ac110009,GenerateName:,Namespace:e2e-tests-events-4p75r,SelfLink:/api/v1/namespaces/e2e-tests-events-4p75r/pods/send-events-45460a94-48ab-11e9-bf64-0242ac110009,UID:45488f09-48ab-11e9-a072-fa163e921bae,ResourceVersion:1293233,Generation:0,CreationTimestamp:2019-03-17 11:53:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 33645073,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wjg2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wjg2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-wjg2c true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0013b4500} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0013b4520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:53:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:53:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:53:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:53:25 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.7,PodIP:10.32.0.4,StartTime:2019-03-17 11:53:25 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-03-17 11:53:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://5b1e9f895667f951122f623a0c1725011be750f9b1f5a524ea67f69cc7838019}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Mar 17 11:53:31.272: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Mar 17 11:53:33.279: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:53:33.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-4p75r" for this suite.
Mar 17 11:54:13.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:54:13.578: INFO: namespace: e2e-tests-events-4p75r, resource: bindings, ignored listing per whitelist
Mar 17 11:54:13.617: INFO: namespace e2e-tests-events-4p75r deletion completed in 40.16311486s

• [SLOW TEST:48.685 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
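
The Events spec asserts that both the scheduler and the kubelet emit events for the pod shown in the object dump above. Outside the suite, the same events can be listed directly; the pod and namespace names below are placeholders, and the field-selector form assumes a reasonably recent kubectl.

kubectl get events --namespace <namespace> \
  --field-selector involvedObject.name=<pod-name>
# or read the "Events:" section at the end of:
kubectl describe pod <pod-name> --namespace <namespace>
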
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:54:13.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-626f03e1-48ab-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume secrets
Mar 17 11:54:14.021: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6276b16c-48ab-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-sh947" to be "success or failure"
Mar 17 11:54:14.040: INFO: Pod "pod-projected-secrets-6276b16c-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 19.33484ms
Mar 17 11:54:16.196: INFO: Pod "pod-projected-secrets-6276b16c-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174456732s
Mar 17 11:54:18.533: INFO: Pod "pod-projected-secrets-6276b16c-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.511797042s
Mar 17 11:54:21.136: INFO: Pod "pod-projected-secrets-6276b16c-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 7.114424567s
Mar 17 11:54:23.140: INFO: Pod "pod-projected-secrets-6276b16c-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 9.119099713s
Mar 17 11:54:25.383: INFO: Pod "pod-projected-secrets-6276b16c-48ab-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.361859807s
STEP: Saw pod success
Mar 17 11:54:25.383: INFO: Pod "pod-projected-secrets-6276b16c-48ab-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:54:25.394: INFO: Trying to get logs from node kube pod pod-projected-secrets-6276b16c-48ab-11e9-bf64-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Mar 17 11:54:25.415: INFO: Waiting for pod pod-projected-secrets-6276b16c-48ab-11e9-bf64-0242ac110009 to disappear
Mar 17 11:54:25.830: INFO: Pod pod-projected-secrets-6276b16c-48ab-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:54:25.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sh947" for this suite.
Mar 17 11:54:35.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:54:35.924: INFO: namespace: e2e-tests-projected-sh947, resource: bindings, ignored listing per whitelist
Mar 17 11:54:35.935: INFO: namespace e2e-tests-projected-sh947 deletion completed in 10.10278174s

• [SLOW TEST:22.317 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:54:35.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Mar 17 11:54:36.151: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix128528767/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:54:36.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-54zqm" for this suite.
Mar 17 11:54:42.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:54:42.330: INFO: namespace: e2e-tests-kubectl-54zqm, resource: bindings, ignored listing per whitelist
Mar 17 11:54:42.351: INFO: namespace e2e-tests-kubectl-54zqm deletion completed in 6.083371127s

• [SLOW TEST:6.415 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
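
Here kubectl proxy is started on a Unix socket instead of a TCP port, and the test fetches /api/ through that socket. A sketch of the same check; the socket path is illustrative and the curl invocation assumes a build with --unix-socket support.

kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
# expected: the APIVersions object, e.g. {"kind":"APIVersions", ...}
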
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:54:42.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Mar 17 11:54:42.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-645sf'
Mar 17 11:54:42.633: INFO: stderr: ""
Mar 17 11:54:42.633: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Mar 17 11:54:43.637: INFO: Selector matched 1 pods for map[app:redis]
Mar 17 11:54:43.637: INFO: Found 0 / 1
Mar 17 11:54:44.637: INFO: Selector matched 1 pods for map[app:redis]
Mar 17 11:54:44.637: INFO: Found 0 / 1
Mar 17 11:54:45.710: INFO: Selector matched 1 pods for map[app:redis]
Mar 17 11:54:45.710: INFO: Found 0 / 1
Mar 17 11:54:46.640: INFO: Selector matched 1 pods for map[app:redis]
Mar 17 11:54:46.640: INFO: Found 0 / 1
Mar 17 11:54:47.637: INFO: Selector matched 1 pods for map[app:redis]
Mar 17 11:54:47.637: INFO: Found 1 / 1
Mar 17 11:54:47.637: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Mar 17 11:54:47.639: INFO: Selector matched 1 pods for map[app:redis]
Mar 17 11:54:47.639: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching strings
Mar 17 11:54:47.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-f6jtz redis-master --namespace=e2e-tests-kubectl-645sf'
Mar 17 11:54:47.714: INFO: stderr: ""
Mar 17 11:54:47.714: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 17 Mar 11:54:45.489 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Mar 11:54:45.489 # Server started, Redis version 3.2.12\n1:M 17 Mar 11:54:45.489 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Mar 11:54:45.489 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Mar 17 11:54:47.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-f6jtz redis-master --namespace=e2e-tests-kubectl-645sf --tail=1'
Mar 17 11:54:47.798: INFO: stderr: ""
Mar 17 11:54:47.798: INFO: stdout: "1:M 17 Mar 11:54:45.489 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Mar 17 11:54:47.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-f6jtz redis-master --namespace=e2e-tests-kubectl-645sf --limit-bytes=1'
Mar 17 11:54:47.874: INFO: stderr: ""
Mar 17 11:54:47.874: INFO: stdout: " "
STEP: exposing timestamps
Mar 17 11:54:47.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-f6jtz redis-master --namespace=e2e-tests-kubectl-645sf --tail=1 --timestamps'
Mar 17 11:54:47.945: INFO: stderr: ""
Mar 17 11:54:47.945: INFO: stdout: "2019-03-17T11:54:45.490571012Z 1:M 17 Mar 11:54:45.489 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Mar 17 11:54:50.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-f6jtz redis-master --namespace=e2e-tests-kubectl-645sf --since=1s'
Mar 17 11:54:50.537: INFO: stderr: ""
Mar 17 11:54:50.537: INFO: stdout: ""
Mar 17 11:54:50.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-f6jtz redis-master --namespace=e2e-tests-kubectl-645sf --since=24h'
Mar 17 11:54:50.617: INFO: stderr: ""
Mar 17 11:54:50.617: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 17 Mar 11:54:45.489 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Mar 11:54:45.489 # Server started, Redis version 3.2.12\n1:M 17 Mar 11:54:45.489 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Mar 11:54:45.489 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Mar 17 11:54:50.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-645sf'
Mar 17 11:54:50.686: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 17 11:54:50.686: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Mar 17 11:54:50.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-645sf'
Mar 17 11:54:50.762: INFO: stderr: "No resources found.\n"
Mar 17 11:54:50.762: INFO: stdout: ""
Mar 17 11:54:50.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-645sf -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 17 11:54:50.831: INFO: stderr: ""
Mar 17 11:54:50.831: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:54:50.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-645sf" for this suite.
Mar 17 11:54:56.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:54:56.940: INFO: namespace: e2e-tests-kubectl-645sf, resource: bindings, ignored listing per whitelist
Mar 17 11:54:56.949: INFO: namespace e2e-tests-kubectl-645sf deletion completed in 6.115696816s

• [SLOW TEST:14.599 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
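
The commands recorded above exercise the main log-filtering flags through the legacy "kubectl log" alias; the equivalent "kubectl logs" forms are listed below with placeholder pod, container and namespace names.

kubectl logs <pod> <container> -n <namespace>                        # full log
kubectl logs <pod> <container> -n <namespace> --tail=1               # last line only
kubectl logs <pod> <container> -n <namespace> --limit-bytes=1        # first byte only
kubectl logs <pod> <container> -n <namespace> --tail=1 --timestamps  # prefix RFC3339 timestamps
kubectl logs <pod> <container> -n <namespace> --since=1s             # only entries newer than 1s
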
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:54:56.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Mar 17 11:54:57.086: INFO: Waiting up to 5m0s for pod "var-expansion-7c231e0c-48ab-11e9-bf64-0242ac110009" in namespace "e2e-tests-var-expansion-c4jlm" to be "success or failure"
Mar 17 11:54:57.134: INFO: Pod "var-expansion-7c231e0c-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 47.916736ms
Mar 17 11:54:59.136: INFO: Pod "var-expansion-7c231e0c-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050507818s
Mar 17 11:55:01.151: INFO: Pod "var-expansion-7c231e0c-48ab-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064807172s
STEP: Saw pod success
Mar 17 11:55:01.151: INFO: Pod "var-expansion-7c231e0c-48ab-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:55:01.155: INFO: Trying to get logs from node kube pod var-expansion-7c231e0c-48ab-11e9-bf64-0242ac110009 container dapi-container: 
STEP: delete the pod
Mar 17 11:55:01.215: INFO: Waiting for pod var-expansion-7c231e0c-48ab-11e9-bf64-0242ac110009 to disappear
Mar 17 11:55:01.274: INFO: Pod var-expansion-7c231e0c-48ab-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:55:01.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-c4jlm" for this suite.
Mar 17 11:55:07.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:55:07.344: INFO: namespace: e2e-tests-var-expansion-c4jlm, resource: bindings, ignored listing per whitelist
Mar 17 11:55:07.356: INFO: namespace e2e-tests-var-expansion-c4jlm deletion completed in 6.073256941s

• [SLOW TEST:10.407 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
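
Env-var composition relies on Kubernetes expanding $(NAME) references to env vars defined earlier in the same container. A minimal sketch; the pod name, image, variable names and values are illustrative.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container             # container name matches the log above
    image: busybox
    command: ["sh", "-c", "echo $COMPOSED_VAR"]
    env:
    - name: BASE_VAR
      value: hello
    - name: COMPOSED_VAR
      value: "$(BASE_VAR)-world"     # expanded by Kubernetes to "hello-world"
EOF
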
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:55:07.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-82536521-48ab-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume configMaps
Mar 17 11:55:07.471: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8253d6a1-48ab-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-gt8cc" to be "success or failure"
Mar 17 11:55:07.479: INFO: Pod "pod-projected-configmaps-8253d6a1-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 8.149574ms
Mar 17 11:55:09.486: INFO: Pod "pod-projected-configmaps-8253d6a1-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014712356s
Mar 17 11:55:11.492: INFO: Pod "pod-projected-configmaps-8253d6a1-48ab-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021143996s
STEP: Saw pod success
Mar 17 11:55:11.492: INFO: Pod "pod-projected-configmaps-8253d6a1-48ab-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:55:11.496: INFO: Trying to get logs from node kube pod pod-projected-configmaps-8253d6a1-48ab-11e9-bf64-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 17 11:55:11.596: INFO: Waiting for pod pod-projected-configmaps-8253d6a1-48ab-11e9-bf64-0242ac110009 to disappear
Mar 17 11:55:11.603: INFO: Pod pod-projected-configmaps-8253d6a1-48ab-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:55:11.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gt8cc" for this suite.
Mar 17 11:55:17.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:55:17.665: INFO: namespace: e2e-tests-projected-gt8cc, resource: bindings, ignored listing per whitelist
Mar 17 11:55:17.715: INFO: namespace e2e-tests-projected-gt8cc deletion completed in 6.108030609s

• [SLOW TEST:10.359 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
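
This variant projects a single ConfigMap key to a remapped path with an explicit per-item file mode. Sketch below; the ConfigMap name, key, path and mode are illustrative.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test   # container name matches the log above
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-cm && cat /etc/projected-cm/mapped-data-1"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/projected-cm
  volumes:
  - name: cm-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo-data   # must already exist in the namespace
          items:
          - key: data-1
            path: mapped-data-1      # remapped file name inside the volume
            mode: 0400               # per-item file mode
EOF
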
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:55:17.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-lrbz
STEP: Creating a pod to test atomic-volume-subpath
Mar 17 11:55:18.094: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lrbz" in namespace "e2e-tests-subpath-ddmdx" to be "success or failure"
Mar 17 11:55:18.104: INFO: Pod "pod-subpath-test-projected-lrbz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.814525ms
Mar 17 11:55:20.108: INFO: Pod "pod-subpath-test-projected-lrbz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013431357s
Mar 17 11:55:22.246: INFO: Pod "pod-subpath-test-projected-lrbz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151464029s
Mar 17 11:55:24.249: INFO: Pod "pod-subpath-test-projected-lrbz": Phase="Running", Reason="", readiness=false. Elapsed: 6.154789257s
Mar 17 11:55:26.253: INFO: Pod "pod-subpath-test-projected-lrbz": Phase="Running", Reason="", readiness=false. Elapsed: 8.158473218s
Mar 17 11:55:28.256: INFO: Pod "pod-subpath-test-projected-lrbz": Phase="Running", Reason="", readiness=false. Elapsed: 10.161802142s
Mar 17 11:55:30.259: INFO: Pod "pod-subpath-test-projected-lrbz": Phase="Running", Reason="", readiness=false. Elapsed: 12.164969894s
Mar 17 11:55:32.303: INFO: Pod "pod-subpath-test-projected-lrbz": Phase="Running", Reason="", readiness=false. Elapsed: 14.208902805s
Mar 17 11:55:34.308: INFO: Pod "pod-subpath-test-projected-lrbz": Phase="Running", Reason="", readiness=false. Elapsed: 16.213851224s
Mar 17 11:55:36.311: INFO: Pod "pod-subpath-test-projected-lrbz": Phase="Running", Reason="", readiness=false. Elapsed: 18.216987467s
Mar 17 11:55:38.323: INFO: Pod "pod-subpath-test-projected-lrbz": Phase="Running", Reason="", readiness=false. Elapsed: 20.2290495s
Mar 17 11:55:40.964: INFO: Pod "pod-subpath-test-projected-lrbz": Phase="Running", Reason="", readiness=false. Elapsed: 22.869815079s
Mar 17 11:55:43.046: INFO: Pod "pod-subpath-test-projected-lrbz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.95146652s
STEP: Saw pod success
Mar 17 11:55:43.046: INFO: Pod "pod-subpath-test-projected-lrbz" satisfied condition "success or failure"
Mar 17 11:55:43.085: INFO: Trying to get logs from node kube pod pod-subpath-test-projected-lrbz container test-container-subpath-projected-lrbz: 
STEP: delete the pod
Mar 17 11:55:43.864: INFO: Waiting for pod pod-subpath-test-projected-lrbz to disappear
Mar 17 11:55:43.876: INFO: Pod pod-subpath-test-projected-lrbz no longer exists
STEP: Deleting pod pod-subpath-test-projected-lrbz
Mar 17 11:55:43.876: INFO: Deleting pod "pod-subpath-test-projected-lrbz" in namespace "e2e-tests-subpath-ddmdx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:55:43.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-ddmdx" for this suite.
Mar 17 11:55:50.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:55:50.169: INFO: namespace: e2e-tests-subpath-ddmdx, resource: bindings, ignored listing per whitelist
Mar 17 11:55:50.224: INFO: namespace e2e-tests-subpath-ddmdx deletion completed in 6.336576934s

• [SLOW TEST:32.509 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
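
The subpath spec mounts a projected volume in the same container twice: once whole and once with subPath pointing at a single entry, to confirm that atomic-writer updates do not break the subPath mount. A hedged sketch; the volume source, paths and pod name are illustrative.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-projected-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /probe-file"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-volume        # the whole projected volume
    - name: projected-vol
      mountPath: /probe-file
      subPath: data-1                # a single projected entry
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-demo-data    # must already exist in the namespace
EOF
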
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:55:50.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Mar 17 11:55:50.387: INFO: Waiting up to 5m0s for pod "client-containers-9be892d0-48ab-11e9-bf64-0242ac110009" in namespace "e2e-tests-containers-fq56q" to be "success or failure"
Mar 17 11:55:50.400: INFO: Pod "client-containers-9be892d0-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 13.007535ms
Mar 17 11:55:52.416: INFO: Pod "client-containers-9be892d0-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029232651s
Mar 17 11:55:54.759: INFO: Pod "client-containers-9be892d0-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371619485s
Mar 17 11:55:56.886: INFO: Pod "client-containers-9be892d0-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.498747009s
Mar 17 11:55:58.889: INFO: Pod "client-containers-9be892d0-48ab-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.501672316s
STEP: Saw pod success
Mar 17 11:55:58.889: INFO: Pod "client-containers-9be892d0-48ab-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:55:58.891: INFO: Trying to get logs from node kube pod client-containers-9be892d0-48ab-11e9-bf64-0242ac110009 container test-container: 
STEP: delete the pod
Mar 17 11:55:58.953: INFO: Waiting for pod client-containers-9be892d0-48ab-11e9-bf64-0242ac110009 to disappear
Mar 17 11:55:58.958: INFO: Pod client-containers-9be892d0-48ab-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:55:58.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-fq56q" for this suite.
Mar 17 11:56:08.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:56:08.997: INFO: namespace: e2e-tests-containers-fq56q, resource: bindings, ignored listing per whitelist
Mar 17 11:56:09.054: INFO: namespace e2e-tests-containers-fq56q deletion completed in 10.091608282s

• [SLOW TEST:18.829 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
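
The override test confirms that "command:" replaces the image's ENTRYPOINT and "args:" replaces its CMD. Sketch below; the image and the echoed strings are illustrative.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo       # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container             # container name matches the log above
    image: busybox
    command: ["/bin/echo"]           # replaces the image ENTRYPOINT
    args: ["override", "arguments"]  # replaces the image CMD
EOF
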
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:56:09.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 17 11:56:17.323: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 17 11:56:17.335: INFO: Pod pod-with-prestop-http-hook still exists
Mar 17 11:56:19.335: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 17 11:56:19.339: INFO: Pod pod-with-prestop-http-hook still exists
Mar 17 11:56:21.335: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 17 11:56:21.339: INFO: Pod pod-with-prestop-http-hook still exists
Mar 17 11:56:23.335: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 17 11:56:23.338: INFO: Pod pod-with-prestop-http-hook still exists
Mar 17 11:56:25.335: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 17 11:56:25.340: INFO: Pod pod-with-prestop-http-hook still exists
Mar 17 11:56:27.335: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 17 11:56:27.338: INFO: Pod pod-with-prestop-http-hook still exists
Mar 17 11:56:29.335: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 17 11:56:29.338: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:56:29.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gr5lj" for this suite.
Mar 17 11:56:53.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:56:53.403: INFO: namespace: e2e-tests-container-lifecycle-hook-gr5lj, resource: bindings, ignored listing per whitelist
Mar 17 11:56:53.443: INFO: namespace e2e-tests-container-lifecycle-hook-gr5lj deletion completed in 24.094525661s

• [SLOW TEST:44.389 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
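
The lifecycle-hook pod above declares a preStop httpGet handler; when the pod is deleted, the kubelet calls the handler before sending SIGTERM, and the test then checks that the separate handler pod received the request. A hedged sketch of the hooked pod only; the image, path and port are illustrative, and host defaults to the pod's own IP rather than the dedicated handler pod used by the suite.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook-demo   # illustrative
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: nginx
    lifecycle:
      preStop:
        httpGet:
          path: /                    # called by the kubelet before SIGTERM
          port: 80
EOF
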
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:56:53.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:56:59.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-m78dj" for this suite.
Mar 17 11:57:05.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:57:05.887: INFO: namespace: e2e-tests-emptydir-wrapper-m78dj, resource: bindings, ignored listing per whitelist
Mar 17 11:57:05.919: INFO: namespace e2e-tests-emptydir-wrapper-m78dj deletion completed in 6.144961872s

• [SLOW TEST:12.476 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:57:05.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 17 11:57:16.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 17 11:57:16.400: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 17 11:57:18.400: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 17 11:57:18.406: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 17 11:57:20.401: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 17 11:57:20.406: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 17 11:57:22.401: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 17 11:57:22.405: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 17 11:57:24.400: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 17 11:57:24.404: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 17 11:57:26.400: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 17 11:57:26.403: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 17 11:57:28.401: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 17 11:57:28.405: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 17 11:57:30.400: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 17 11:57:30.405: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 17 11:57:32.400: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 17 11:57:32.403: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 17 11:57:34.400: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 17 11:57:34.404: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 17 11:57:36.400: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 17 11:57:36.406: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 17 11:57:38.400: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 17 11:57:38.404: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:57:38.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mnhq7" for this suite.
Mar 17 11:58:00.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:58:00.507: INFO: namespace: e2e-tests-container-lifecycle-hook-mnhq7, resource: bindings, ignored listing per whitelist
Mar 17 11:58:00.571: INFO: namespace e2e-tests-container-lifecycle-hook-mnhq7 deletion completed in 22.151467601s

• [SLOW TEST:54.652 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
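For reference (not part of the test output above): a hand-written pod manifest that exercises the same preStop exec hook mechanism as this spec; the pod name, image and commands are assumptions of mine, not taken from the run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo               # assumed name, not the pod created by the suite
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop-ran > /tmp/prestop"]
EOF
# Deleting the pod runs the preStop command before SIGTERM reaches the container,
# which is the behaviour the spec checks while waiting for the pod to disappear.
kubectl delete pod prestop-demo
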
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:58:00.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 11:58:00.736: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Mar 17 11:58:05.739: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 17 11:58:05.739: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Mar 17 11:58:05.758: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-49m2h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-49m2h/deployments/test-cleanup-deployment,UID:ec9808eb-48ab-11e9-a072-fa163e921bae,ResourceVersion:1293892,Generation:1,CreationTimestamp:2019-03-17 11:58:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Mar 17 11:58:05.779: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Mar 17 11:58:05.779: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Mar 17 11:58:05.779: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-49m2h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-49m2h/replicasets/test-cleanup-controller,UID:e9935e7d-48ab-11e9-a072-fa163e921bae,ResourceVersion:1293893,Generation:1,CreationTimestamp:2019-03-17 11:58:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ec9808eb-48ab-11e9-a072-fa163e921bae 0xc0010b7bc7 0xc0010b7bc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Mar 17 11:58:05.782: INFO: Pod "test-cleanup-controller-jphxv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-jphxv,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-49m2h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49m2h/pods/test-cleanup-controller-jphxv,UID:e99cf484-48ab-11e9-a072-fa163e921bae,ResourceVersion:1293890,Generation:0,CreationTimestamp:2019-03-17 11:58:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller e9935e7d-48ab-11e9-a072-fa163e921bae 0xc0016a6617 0xc0016a6618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4pqp9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4pqp9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4pqp9 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016a66a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0016a6870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:58:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:58:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:58:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 11:58:00 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.7,PodIP:10.32.0.4,StartTime:2019-03-17 11:58:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-03-17 11:58:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:b67e90a1d8088f0e205c77c793c271524773a6de163fb3855b1c1bedf979da7d docker://f581dd1fb2262e7760e51a0dd669f805d0db6ae23998c7daed5739721b9158b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:58:05.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-49m2h" for this suite.
Mar 17 11:58:11.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:58:11.916: INFO: namespace: e2e-tests-deployment-49m2h, resource: bindings, ignored listing per whitelist
Mar 17 11:58:11.923: INFO: namespace e2e-tests-deployment-49m2h deletion completed in 6.137510289s

• [SLOW TEST:11.352 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
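The Deployment dumped above has RevisionHistoryLimit set to 0, which is what makes the controller delete superseded ReplicaSets instead of keeping them for rollback. A minimal sketch of a Deployment configured the same way (names are my assumptions; the redis image is the one shown in the dump):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo               # assumed name
spec:
  replicas: 1
  revisionHistoryLimit: 0          # old ReplicaSets are deleted rather than kept for rollback
  selector:
    matchLabels:
      app: cleanup-demo
  template:
    metadata:
      labels:
        app: cleanup-demo
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# After updating the pod template (e.g. changing the image), no old ReplicaSets should remain:
kubectl get replicasets -l app=cleanup-demo
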
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:58:11.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f04e8ebc-48ab-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume secrets
Mar 17 11:58:11.989: INFO: Waiting up to 5m0s for pod "pod-secrets-f04edb4b-48ab-11e9-bf64-0242ac110009" in namespace "e2e-tests-secrets-f7xg6" to be "success or failure"
Mar 17 11:58:11.992: INFO: Pod "pod-secrets-f04edb4b-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.61129ms
Mar 17 11:58:13.995: INFO: Pod "pod-secrets-f04edb4b-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005977199s
Mar 17 11:58:15.999: INFO: Pod "pod-secrets-f04edb4b-48ab-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009242974s
Mar 17 11:58:18.002: INFO: Pod "pod-secrets-f04edb4b-48ab-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012773311s
STEP: Saw pod success
Mar 17 11:58:18.002: INFO: Pod "pod-secrets-f04edb4b-48ab-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:58:18.005: INFO: Trying to get logs from node kube pod pod-secrets-f04edb4b-48ab-11e9-bf64-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Mar 17 11:58:18.127: INFO: Waiting for pod pod-secrets-f04edb4b-48ab-11e9-bf64-0242ac110009 to disappear
Mar 17 11:58:18.130: INFO: Pod pod-secrets-f04edb4b-48ab-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:58:18.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-f7xg6" for this suite.
Mar 17 11:58:24.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:58:24.195: INFO: namespace: e2e-tests-secrets-f7xg6, resource: bindings, ignored listing per whitelist
Mar 17 11:58:24.456: INFO: namespace e2e-tests-secrets-f7xg6 deletion completed in 6.323858711s

• [SLOW TEST:12.534 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
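A minimal sketch of the secret-in-a-volume pattern this spec exercises, with a secret name, key and mount path of my own choosing:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
EOF
# The pod should run to Succeeded and its log should print "value-1":
kubectl logs secret-volume-demo
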
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:58:24.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:58:28.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-rffg5" for this suite.
Mar 17 11:59:08.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:59:08.782: INFO: namespace: e2e-tests-kubelet-test-rffg5, resource: bindings, ignored listing per whitelist
Mar 17 11:59:08.843: INFO: namespace e2e-tests-kubelet-test-rffg5 deletion completed in 40.083627509s

• [SLOW TEST:44.387 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
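A minimal sketch of a read-only-root-filesystem pod in the same spirit as this spec (image and command are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "touch /file && echo writable || echo read-only"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
# The write to / should fail, so the log ends with "read-only":
kubectl logs readonly-rootfs-demo
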
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:59:08.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 17 11:59:09.110: INFO: Waiting up to 5m0s for pod "pod-125947b0-48ac-11e9-bf64-0242ac110009" in namespace "e2e-tests-emptydir-vs9qq" to be "success or failure"
Mar 17 11:59:09.114: INFO: Pod "pod-125947b0-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397405ms
Mar 17 11:59:11.126: INFO: Pod "pod-125947b0-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016342613s
Mar 17 11:59:13.182: INFO: Pod "pod-125947b0-48ac-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072140644s
STEP: Saw pod success
Mar 17 11:59:13.182: INFO: Pod "pod-125947b0-48ac-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:59:13.185: INFO: Trying to get logs from node kube pod pod-125947b0-48ac-11e9-bf64-0242ac110009 container test-container: 
STEP: delete the pod
Mar 17 11:59:13.235: INFO: Waiting for pod pod-125947b0-48ac-11e9-bf64-0242ac110009 to disappear
Mar 17 11:59:13.257: INFO: Pod pod-125947b0-48ac-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:59:13.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vs9qq" for this suite.
Mar 17 11:59:21.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:59:21.667: INFO: namespace: e2e-tests-emptydir-vs9qq, resource: bindings, ignored listing per whitelist
Mar 17 11:59:21.694: INFO: namespace e2e-tests-emptydir-vs9qq deletion completed in 8.431537052s

• [SLOW TEST:12.851 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
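A minimal sketch of the emptyDir pattern exercised here: mount an emptyDir on the default medium and inspect the mount's mode and writability from inside the container (names and image are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # default medium, i.e. node-local storage
EOF
kubectl logs emptydir-mode-demo    # shows the directory mode of the emptyDir mount
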
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:59:21.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 11:59:22.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a0935d2-48ac-11e9-bf64-0242ac110009" in namespace "e2e-tests-downward-api-2ddtp" to be "success or failure"
Mar 17 11:59:22.348: INFO: Pod "downwardapi-volume-1a0935d2-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 344.59681ms
Mar 17 11:59:24.444: INFO: Pod "downwardapi-volume-1a0935d2-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.440010201s
Mar 17 11:59:26.447: INFO: Pod "downwardapi-volume-1a0935d2-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.443066191s
Mar 17 11:59:28.449: INFO: Pod "downwardapi-volume-1a0935d2-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445750464s
Mar 17 11:59:30.497: INFO: Pod "downwardapi-volume-1a0935d2-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 8.493878153s
Mar 17 11:59:32.500: INFO: Pod "downwardapi-volume-1a0935d2-48ac-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.496763595s
STEP: Saw pod success
Mar 17 11:59:32.500: INFO: Pod "downwardapi-volume-1a0935d2-48ac-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 11:59:32.502: INFO: Trying to get logs from node kube pod downwardapi-volume-1a0935d2-48ac-11e9-bf64-0242ac110009 container client-container: 
STEP: delete the pod
Mar 17 11:59:32.853: INFO: Waiting for pod downwardapi-volume-1a0935d2-48ac-11e9-bf64-0242ac110009 to disappear
Mar 17 11:59:32.875: INFO: Pod downwardapi-volume-1a0935d2-48ac-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:59:32.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2ddtp" for this suite.
Mar 17 11:59:38.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:59:39.056: INFO: namespace: e2e-tests-downward-api-2ddtp, resource: bindings, ignored listing per whitelist
Mar 17 11:59:39.079: INFO: namespace e2e-tests-downward-api-2ddtp deletion completed in 6.197502525s

• [SLOW TEST:17.384 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
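A minimal sketch of the downward API volume plugin exposing a container's CPU limit as a file, which is what this spec reads back (names, image and the 500m limit are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
EOF
# Should print 500 (the limit expressed in units of the 1m divisor):
kubectl logs downwardapi-cpu-limit-demo
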
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:59:39.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Mar 17 11:59:39.307: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 17 11:59:40.815: INFO: Waiting for terminating namespaces to be deleted...
Mar 17 11:59:40.818: INFO: 
Logging pods the kubelet thinks is on node kube before test
Mar 17 11:59:40.826: INFO: etcd-kube from kube-system started at  (0 container statuses recorded)
Mar 17 11:59:40.826: INFO: kube-apiserver-kube from kube-system started at  (0 container statuses recorded)
Mar 17 11:59:40.826: INFO: kube-proxy-6jlw8 from kube-system started at 2019-03-09 11:38:22 +0000 UTC (1 container statuses recorded)
Mar 17 11:59:40.826: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 17 11:59:40.826: INFO: kube-scheduler-kube from kube-system started at  (0 container statuses recorded)
Mar 17 11:59:40.826: INFO: weave-net-47d2b from kube-system started at 2019-03-09 11:38:24 +0000 UTC (2 container statuses recorded)
Mar 17 11:59:40.826: INFO: 	Container weave ready: true, restart count 0
Mar 17 11:59:40.826: INFO: 	Container weave-npc ready: true, restart count 0
Mar 17 11:59:40.826: INFO: coredns-86c58d9df4-lrf5x from kube-system started at 2019-03-09 11:38:41 +0000 UTC (1 container statuses recorded)
Mar 17 11:59:40.826: INFO: 	Container coredns ready: true, restart count 0
Mar 17 11:59:40.826: INFO: coredns-86c58d9df4-xv8sl from kube-system started at 2019-03-09 11:38:41 +0000 UTC (1 container statuses recorded)
Mar 17 11:59:40.826: INFO: 	Container coredns ready: true, restart count 0
Mar 17 11:59:40.826: INFO: kube-controller-manager-kube from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node kube
Mar 17 11:59:41.093: INFO: Pod coredns-86c58d9df4-lrf5x requesting resource cpu=100m on Node kube
Mar 17 11:59:41.093: INFO: Pod coredns-86c58d9df4-xv8sl requesting resource cpu=100m on Node kube
Mar 17 11:59:41.093: INFO: Pod etcd-kube requesting resource cpu=0m on Node kube
Mar 17 11:59:41.093: INFO: Pod kube-apiserver-kube requesting resource cpu=250m on Node kube
Mar 17 11:59:41.093: INFO: Pod kube-controller-manager-kube requesting resource cpu=200m on Node kube
Mar 17 11:59:41.093: INFO: Pod kube-proxy-6jlw8 requesting resource cpu=0m on Node kube
Mar 17 11:59:41.093: INFO: Pod kube-scheduler-kube requesting resource cpu=100m on Node kube
Mar 17 11:59:41.093: INFO: Pod weave-net-47d2b requesting resource cpu=20m on Node kube
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-256c2817-48ac-11e9-bf64-0242ac110009.158cbd2f047cb185], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-8jrb8/filler-pod-256c2817-48ac-11e9-bf64-0242ac110009 to kube]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-256c2817-48ac-11e9-bf64-0242ac110009.158cbd2f6c046f2b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-256c2817-48ac-11e9-bf64-0242ac110009.158cbd2faf166a7c], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-256c2817-48ac-11e9-bf64-0242ac110009.158cbd30074319eb], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.158cbd306af4c648], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node kube
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 11:59:48.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-8jrb8" for this suite.
Mar 17 11:59:54.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 11:59:54.456: INFO: namespace: e2e-tests-sched-pred-8jrb8, resource: bindings, ignored listing per whitelist
Mar 17 11:59:54.460: INFO: namespace e2e-tests-sched-pred-8jrb8 deletion completed in 6.102463376s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:15.381 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
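The FailedScheduling event above ("0/1 nodes are available: 1 Insufficient cpu.") can be reproduced by hand with a pod whose CPU request exceeds anything the node can allocate; the 100-CPU figure below is an arbitrary assumption, while the pause image is the one already referenced in the events:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: overcommitted-cpu-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "100"                 # far more than a single node can offer
EOF
kubectl describe pod overcommitted-cpu-demo   # Events should show the Insufficient cpu message
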
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 11:59:54.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-2d834a80-48ac-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume configMaps
Mar 17 11:59:54.825: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2d874d4c-48ac-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-8qxtb" to be "success or failure"
Mar 17 11:59:54.886: INFO: Pod "pod-projected-configmaps-2d874d4c-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 60.499992ms
Mar 17 11:59:56.890: INFO: Pod "pod-projected-configmaps-2d874d4c-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064999415s
Mar 17 11:59:58.894: INFO: Pod "pod-projected-configmaps-2d874d4c-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068696797s
Mar 17 12:00:00.899: INFO: Pod "pod-projected-configmaps-2d874d4c-48ac-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.073938746s
STEP: Saw pod success
Mar 17 12:00:00.899: INFO: Pod "pod-projected-configmaps-2d874d4c-48ac-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:00:00.903: INFO: Trying to get logs from node kube pod pod-projected-configmaps-2d874d4c-48ac-11e9-bf64-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 17 12:00:01.322: INFO: Waiting for pod pod-projected-configmaps-2d874d4c-48ac-11e9-bf64-0242ac110009 to disappear
Mar 17 12:00:01.381: INFO: Pod pod-projected-configmaps-2d874d4c-48ac-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:00:01.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8qxtb" for this suite.
Mar 17 12:00:07.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:00:07.669: INFO: namespace: e2e-tests-projected-8qxtb, resource: bindings, ignored listing per whitelist
Mar 17 12:00:07.881: INFO: namespace e2e-tests-projected-8qxtb deletion completed in 6.254194782s

• [SLOW TEST:13.421 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
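A minimal sketch of a projected configMap volume with key-to-path mappings, the mechanism this spec consumes (configMap name, key and paths are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected/path/to/data-1"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:                   # the "mappings": keys renamed to chosen file paths
          - key: data-1
            path: path/to/data-1
EOF
kubectl logs projected-cm-mapping-demo   # should print value-1
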
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:00:07.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:00:08.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-gb9lb" for this suite.
Mar 17 12:00:14.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:00:14.128: INFO: namespace: e2e-tests-services-gb9lb, resource: bindings, ignored listing per whitelist
Mar 17 12:00:14.132: INFO: namespace e2e-tests-services-gb9lb deletion completed in 6.107816895s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.250 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
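Roughly, this spec asserts that the built-in kubernetes service in the default namespace fronts the API server over HTTPS. A quick manual check of the same thing:

kubectl get service kubernetes -n default -o wide   # expect a ClusterIP service exposing 443/TCP
kubectl get endpoints kubernetes -n default         # expect the API server address(es) behind it
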
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:00:14.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-3935e0a6-48ac-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume secrets
Mar 17 12:00:14.336: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-393696f6-48ac-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-cglj4" to be "success or failure"
Mar 17 12:00:14.348: INFO: Pod "pod-projected-secrets-393696f6-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 11.90453ms
Mar 17 12:00:16.351: INFO: Pod "pod-projected-secrets-393696f6-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014845837s
Mar 17 12:00:18.355: INFO: Pod "pod-projected-secrets-393696f6-48ac-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018671838s
STEP: Saw pod success
Mar 17 12:00:18.355: INFO: Pod "pod-projected-secrets-393696f6-48ac-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:00:18.358: INFO: Trying to get logs from node kube pod pod-projected-secrets-393696f6-48ac-11e9-bf64-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Mar 17 12:00:18.395: INFO: Waiting for pod pod-projected-secrets-393696f6-48ac-11e9-bf64-0242ac110009 to disappear
Mar 17 12:00:18.524: INFO: Pod pod-projected-secrets-393696f6-48ac-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:00:18.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cglj4" for this suite.
Mar 17 12:00:24.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:00:24.582: INFO: namespace: e2e-tests-projected-cglj4, resource: bindings, ignored listing per whitelist
Mar 17 12:00:24.647: INFO: namespace e2e-tests-projected-cglj4 deletion completed in 6.12048613s

• [SLOW TEST:10.515 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
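A minimal sketch of a projected secret volume with a key-to-path mapping and a per-item file mode, which is the "Item Mode" in this spec's name (secret name, key, path and the 0400 mode are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400             # per-item file mode (octal), applied to this projected file only
EOF
kubectl logs projected-secret-mode-demo   # should print value-1
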
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:00:24.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Mar 17 12:00:25.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-z56qn'
Mar 17 12:00:28.846: INFO: stderr: ""
Mar 17 12:00:28.846: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 17 12:00:28.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z56qn'
Mar 17 12:00:29.083: INFO: stderr: ""
Mar 17 12:00:29.083: INFO: stdout: "update-demo-nautilus-8fdmq update-demo-nautilus-wszbp "
Mar 17 12:00:29.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8fdmq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z56qn'
Mar 17 12:00:29.282: INFO: stderr: ""
Mar 17 12:00:29.282: INFO: stdout: ""
Mar 17 12:00:29.282: INFO: update-demo-nautilus-8fdmq is created but not running
Mar 17 12:00:34.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z56qn'
Mar 17 12:00:34.377: INFO: stderr: ""
Mar 17 12:00:34.377: INFO: stdout: "update-demo-nautilus-8fdmq update-demo-nautilus-wszbp "
Mar 17 12:00:34.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8fdmq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z56qn'
Mar 17 12:00:34.478: INFO: stderr: ""
Mar 17 12:00:34.478: INFO: stdout: "true"
Mar 17 12:00:34.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8fdmq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z56qn'
Mar 17 12:00:34.566: INFO: stderr: ""
Mar 17 12:00:34.567: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 17 12:00:34.567: INFO: validating pod update-demo-nautilus-8fdmq
Mar 17 12:00:34.571: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 17 12:00:34.571: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 17 12:00:34.571: INFO: update-demo-nautilus-8fdmq is verified up and running
Mar 17 12:00:34.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wszbp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z56qn'
Mar 17 12:00:34.638: INFO: stderr: ""
Mar 17 12:00:34.638: INFO: stdout: "true"
Mar 17 12:00:34.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wszbp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z56qn'
Mar 17 12:00:34.703: INFO: stderr: ""
Mar 17 12:00:34.703: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 17 12:00:34.703: INFO: validating pod update-demo-nautilus-wszbp
Mar 17 12:00:34.707: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 17 12:00:34.707: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 17 12:00:34.707: INFO: update-demo-nautilus-wszbp is verified up and running
STEP: rolling-update to new replication controller
Mar 17 12:00:34.708: INFO: scanned /root for discovery docs: 
Mar 17 12:00:34.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-z56qn'
Mar 17 12:00:57.326: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Mar 17 12:00:57.326: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 17 12:00:57.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z56qn'
Mar 17 12:00:57.409: INFO: stderr: ""
Mar 17 12:00:57.409: INFO: stdout: "update-demo-kitten-h5v2l update-demo-kitten-krjrs "
Mar 17 12:00:57.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h5v2l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z56qn'
Mar 17 12:00:57.490: INFO: stderr: ""
Mar 17 12:00:57.490: INFO: stdout: "true"
Mar 17 12:00:57.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h5v2l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z56qn'
Mar 17 12:00:57.561: INFO: stderr: ""
Mar 17 12:00:57.561: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Mar 17 12:00:57.561: INFO: validating pod update-demo-kitten-h5v2l
Mar 17 12:00:57.575: INFO: got data: {
  "image": "kitten.jpg"
}

Mar 17 12:00:57.575: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 17 12:00:57.575: INFO: update-demo-kitten-h5v2l is verified up and running
Mar 17 12:00:57.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-krjrs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z56qn'
Mar 17 12:00:57.669: INFO: stderr: ""
Mar 17 12:00:57.669: INFO: stdout: "true"
Mar 17 12:00:57.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-krjrs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z56qn'
Mar 17 12:00:57.756: INFO: stderr: ""
Mar 17 12:00:57.756: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Mar 17 12:00:57.756: INFO: validating pod update-demo-kitten-krjrs
Mar 17 12:00:57.765: INFO: got data: {
  "image": "kitten.jpg"
}

Mar 17 12:00:57.765: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 17 12:00:57.765: INFO: update-demo-kitten-krjrs is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:00:57.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-z56qn" for this suite.
Mar 17 12:01:21.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:01:22.030: INFO: namespace: e2e-tests-kubectl-z56qn, resource: bindings, ignored listing per whitelist
Mar 17 12:01:22.030: INFO: namespace e2e-tests-kubectl-z56qn deletion completed in 24.261535204s

• [SLOW TEST:57.383 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
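As the stderr above notes, kubectl rolling-update is deprecated (it has since been removed from kubectl); the equivalent flow today uses a Deployment plus kubectl rollout. A sketch using the two images from the run above, with a deployment name of my own choosing:

# The container name defaults to the image basename ("nautilus") when created this way.
kubectl create deployment update-demo --image=gcr.io/kubernetes-e2e-test-images/nautilus:1.0
kubectl set image deployment/update-demo nautilus=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo   # waits for the rolling update to finish
kubectl rollout undo deployment/update-demo     # optional: roll back to the nautilus revision
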
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:01:22.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-r25df
Mar 17 12:01:26.800: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-r25df
STEP: checking the pod's current state and verifying that restartCount is present
Mar 17 12:01:26.803: INFO: Initial restart count of pod liveness-http is 0
Mar 17 12:01:38.821: INFO: Restart count of pod e2e-tests-container-probe-r25df/liveness-http is now 1 (12.018590172s elapsed)
Mar 17 12:01:58.860: INFO: Restart count of pod e2e-tests-container-probe-r25df/liveness-http is now 2 (32.057041091s elapsed)
Mar 17 12:02:18.892: INFO: Restart count of pod e2e-tests-container-probe-r25df/liveness-http is now 3 (52.089186064s elapsed)
Mar 17 12:02:38.928: INFO: Restart count of pod e2e-tests-container-probe-r25df/liveness-http is now 4 (1m12.12546942s elapsed)
Mar 17 12:03:51.103: INFO: Restart count of pod e2e-tests-container-probe-r25df/liveness-http is now 5 (2m24.300575872s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:03:51.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-r25df" for this suite.
Mar 17 12:03:57.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:03:57.246: INFO: namespace: e2e-tests-container-probe-r25df, resource: bindings, ignored listing per whitelist
Mar 17 12:03:57.265: INFO: namespace e2e-tests-container-probe-r25df deletion completed in 6.081139893s

• [SLOW TEST:155.234 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
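The pod above uses an HTTP liveness probe that keeps failing, so the kubelet restarts the container and the restart count only ever grows. The same behaviour can be sketched with an exec probe and a file that disappears after 30 seconds (image, names and timings are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    command: ["sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# After ~30s the probe starts failing, the container is restarted, and the RESTARTS
# column increases monotonically, which is what the spec asserts:
kubectl get pod liveness-demo -w
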
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:03:57.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 12:03:57.376: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be27916e-48ac-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-b7cw6" to be "success or failure"
Mar 17 12:03:57.383: INFO: Pod "downwardapi-volume-be27916e-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.92476ms
Mar 17 12:03:59.386: INFO: Pod "downwardapi-volume-be27916e-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010599236s
Mar 17 12:04:01.389: INFO: Pod "downwardapi-volume-be27916e-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013653458s
Mar 17 12:04:03.393: INFO: Pod "downwardapi-volume-be27916e-48ac-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016957257s
STEP: Saw pod success
Mar 17 12:04:03.393: INFO: Pod "downwardapi-volume-be27916e-48ac-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:04:03.395: INFO: Trying to get logs from node kube pod downwardapi-volume-be27916e-48ac-11e9-bf64-0242ac110009 container client-container: 
STEP: delete the pod
Mar 17 12:04:03.571: INFO: Waiting for pod downwardapi-volume-be27916e-48ac-11e9-bf64-0242ac110009 to disappear
Mar 17 12:04:03.573: INFO: Pod downwardapi-volume-be27916e-48ac-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:04:03.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b7cw6" for this suite.
Mar 17 12:04:09.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:04:09.641: INFO: namespace: e2e-tests-projected-b7cw6, resource: bindings, ignored listing per whitelist
Mar 17 12:04:09.662: INFO: namespace e2e-tests-projected-b7cw6 deletion completed in 6.086169886s

• [SLOW TEST:12.397 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
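A minimal sketch of a projected downwardAPI volume exposing a container's memory request, which is what this spec reads back (names, image and the 64Mi request are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi
EOF
kubectl logs projected-downwardapi-demo   # should print 64 (the request in Mi)
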
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:04:09.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-c58f30e1-48ac-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume secrets
Mar 17 12:04:09.878: INFO: Waiting up to 5m0s for pod "pod-secrets-c5a0d58f-48ac-11e9-bf64-0242ac110009" in namespace "e2e-tests-secrets-t9rsc" to be "success or failure"
Mar 17 12:04:09.891: INFO: Pod "pod-secrets-c5a0d58f-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 13.022931ms
Mar 17 12:04:11.894: INFO: Pod "pod-secrets-c5a0d58f-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016431329s
Mar 17 12:04:13.897: INFO: Pod "pod-secrets-c5a0d58f-48ac-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019536041s
STEP: Saw pod success
Mar 17 12:04:13.897: INFO: Pod "pod-secrets-c5a0d58f-48ac-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:04:13.906: INFO: Trying to get logs from node kube pod pod-secrets-c5a0d58f-48ac-11e9-bf64-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Mar 17 12:04:13.933: INFO: Waiting for pod pod-secrets-c5a0d58f-48ac-11e9-bf64-0242ac110009 to disappear
Mar 17 12:04:13.940: INFO: Pod pod-secrets-c5a0d58f-48ac-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:04:13.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-t9rsc" for this suite.
Mar 17 12:04:19.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:04:20.045: INFO: namespace: e2e-tests-secrets-t9rsc, resource: bindings, ignored listing per whitelist
Mar 17 12:04:20.089: INFO: namespace e2e-tests-secrets-t9rsc deletion completed in 6.146064726s

• [SLOW TEST:10.427 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
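
For the [sig-storage] Secrets spec above, the interesting combination is a secret volume's defaultMode together with a pod-level runAsUser and fsGroup. A rough reproduction with kubectl (secret name, UID/GID and mode are placeholders, not the values used by the test):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000     # run the container as a non-root user
    fsGroup: 2000       # volume files are made group-accessible to this GID
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0440   # permission bits applied to every projected key
EOF
kubectl logs secret-nonroot-demo
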
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:04:20.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-cbc5f4f6-48ac-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume secrets
Mar 17 12:04:20.200: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cbc715f7-48ac-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-x76l9" to be "success or failure"
Mar 17 12:04:20.274: INFO: Pod "pod-projected-secrets-cbc715f7-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 74.633542ms
Mar 17 12:04:22.388: INFO: Pod "pod-projected-secrets-cbc715f7-48ac-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18826488s
Mar 17 12:04:24.392: INFO: Pod "pod-projected-secrets-cbc715f7-48ac-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.192184214s
STEP: Saw pod success
Mar 17 12:04:24.392: INFO: Pod "pod-projected-secrets-cbc715f7-48ac-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:04:24.394: INFO: Trying to get logs from node kube pod pod-projected-secrets-cbc715f7-48ac-11e9-bf64-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Mar 17 12:04:24.496: INFO: Waiting for pod pod-projected-secrets-cbc715f7-48ac-11e9-bf64-0242ac110009 to disappear
Mar 17 12:04:24.576: INFO: Pod pod-projected-secrets-cbc715f7-48ac-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:04:24.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-x76l9" for this suite.
Mar 17 12:04:30.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:04:30.660: INFO: namespace: e2e-tests-projected-x76l9, resource: bindings, ignored listing per whitelist
Mar 17 12:04:30.699: INFO: namespace e2e-tests-projected-x76l9 deletion completed in 6.116516722s

• [SLOW TEST:10.610 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
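
The projected-secret variant above differs only in that the secret is wired through a projected volume, with defaultMode set on the projection itself. A sketch along the same lines (names and mode again illustrative):

kubectl create secret generic projected-demo-secret --from-literal=username=admin
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /projected && cat /projected/username"]
    volumeMounts:
    - name: projected-volume
      mountPath: /projected
  volumes:
  - name: projected-volume
    projected:
      defaultMode: 0640   # mode applied to the projected files
      sources:
      - secret:
          name: projected-demo-secret
EOF
kubectl logs projected-secret-demo
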
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:04:30.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Mar 17 12:04:30.868: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9xs7h,SelfLink:/api/v1/namespaces/e2e-tests-watch-9xs7h/configmaps/e2e-watch-test-configmap-a,UID:d21de7ba-48ac-11e9-a072-fa163e921bae,ResourceVersion:1294867,Generation:0,CreationTimestamp:2019-03-17 12:04:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 17 12:04:30.869: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9xs7h,SelfLink:/api/v1/namespaces/e2e-tests-watch-9xs7h/configmaps/e2e-watch-test-configmap-a,UID:d21de7ba-48ac-11e9-a072-fa163e921bae,ResourceVersion:1294867,Generation:0,CreationTimestamp:2019-03-17 12:04:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Mar 17 12:04:40.876: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9xs7h,SelfLink:/api/v1/namespaces/e2e-tests-watch-9xs7h/configmaps/e2e-watch-test-configmap-a,UID:d21de7ba-48ac-11e9-a072-fa163e921bae,ResourceVersion:1294880,Generation:0,CreationTimestamp:2019-03-17 12:04:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar 17 12:04:40.876: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9xs7h,SelfLink:/api/v1/namespaces/e2e-tests-watch-9xs7h/configmaps/e2e-watch-test-configmap-a,UID:d21de7ba-48ac-11e9-a072-fa163e921bae,ResourceVersion:1294880,Generation:0,CreationTimestamp:2019-03-17 12:04:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Mar 17 12:04:50.884: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9xs7h,SelfLink:/api/v1/namespaces/e2e-tests-watch-9xs7h/configmaps/e2e-watch-test-configmap-a,UID:d21de7ba-48ac-11e9-a072-fa163e921bae,ResourceVersion:1294893,Generation:0,CreationTimestamp:2019-03-17 12:04:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 17 12:04:50.884: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9xs7h,SelfLink:/api/v1/namespaces/e2e-tests-watch-9xs7h/configmaps/e2e-watch-test-configmap-a,UID:d21de7ba-48ac-11e9-a072-fa163e921bae,ResourceVersion:1294893,Generation:0,CreationTimestamp:2019-03-17 12:04:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Mar 17 12:05:00.892: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9xs7h,SelfLink:/api/v1/namespaces/e2e-tests-watch-9xs7h/configmaps/e2e-watch-test-configmap-a,UID:d21de7ba-48ac-11e9-a072-fa163e921bae,ResourceVersion:1294906,Generation:0,CreationTimestamp:2019-03-17 12:04:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 17 12:05:00.892: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9xs7h,SelfLink:/api/v1/namespaces/e2e-tests-watch-9xs7h/configmaps/e2e-watch-test-configmap-a,UID:d21de7ba-48ac-11e9-a072-fa163e921bae,ResourceVersion:1294906,Generation:0,CreationTimestamp:2019-03-17 12:04:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Mar 17 12:05:10.899: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9xs7h,SelfLink:/api/v1/namespaces/e2e-tests-watch-9xs7h/configmaps/e2e-watch-test-configmap-b,UID:ea00f247-48ac-11e9-a072-fa163e921bae,ResourceVersion:1294919,Generation:0,CreationTimestamp:2019-03-17 12:05:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 17 12:05:10.899: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9xs7h,SelfLink:/api/v1/namespaces/e2e-tests-watch-9xs7h/configmaps/e2e-watch-test-configmap-b,UID:ea00f247-48ac-11e9-a072-fa163e921bae,ResourceVersion:1294919,Generation:0,CreationTimestamp:2019-03-17 12:05:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Mar 17 12:05:20.913: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9xs7h,SelfLink:/api/v1/namespaces/e2e-tests-watch-9xs7h/configmaps/e2e-watch-test-configmap-b,UID:ea00f247-48ac-11e9-a072-fa163e921bae,ResourceVersion:1294932,Generation:0,CreationTimestamp:2019-03-17 12:05:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 17 12:05:20.914: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9xs7h,SelfLink:/api/v1/namespaces/e2e-tests-watch-9xs7h/configmaps/e2e-watch-test-configmap-b,UID:ea00f247-48ac-11e9-a072-fa163e921bae,ResourceVersion:1294932,Generation:0,CreationTimestamp:2019-03-17 12:05:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:05:30.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9xs7h" for this suite.
Mar 17 12:05:36.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:05:36.944: INFO: namespace: e2e-tests-watch-9xs7h, resource: bindings, ignored listing per whitelist
Mar 17 12:05:36.983: INFO: namespace e2e-tests-watch-9xs7h deletion completed in 6.065659313s

• [SLOW TEST:66.284 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
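
The Watchers spec above drives three label-selected watches (label A, label B, and A-or-B) and asserts that each sees exactly the ADDED/MODIFIED/DELETED events it should. The same traffic can be generated by hand; the watch below corresponds to watcher A, and the mutations mirror the log entries above:

# Terminal 1: watch configmaps carrying label A (repeat with
# multiple-watchers-B, and with the selector
# 'watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)',
# for the other two watchers).
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch

# Terminal 2: produce ADDED, MODIFIED and DELETED notifications.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
EOF
kubectl patch configmap e2e-watch-test-configmap-a --type=merge -p '{"data":{"mutation":"1"}}'
kubectl patch configmap e2e-watch-test-configmap-a --type=merge -p '{"data":{"mutation":"2"}}'
kubectl delete configmap e2e-watch-test-configmap-a
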
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:05:36.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 12:05:37.483: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f9ad7e39-48ac-11e9-a072-fa163e921bae", Controller:(*bool)(0xc0019750c2), BlockOwnerDeletion:(*bool)(0xc0019750c3)}}
Mar 17 12:05:37.636: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f9aaf968-48ac-11e9-a072-fa163e921bae", Controller:(*bool)(0xc0005893c2), BlockOwnerDeletion:(*bool)(0xc0005893c3)}}
Mar 17 12:05:37.662: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f9abddc3-48ac-11e9-a072-fa163e921bae", Controller:(*bool)(0xc001975792), BlockOwnerDeletion:(*bool)(0xc001975793)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:05:42.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-9r6hh" for this suite.
Mar 17 12:05:48.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:05:48.793: INFO: namespace: e2e-tests-gc-9r6hh, resource: bindings, ignored listing per whitelist
Mar 17 12:05:48.869: INFO: namespace e2e-tests-gc-9r6hh deletion completed in 6.119939689s

• [SLOW TEST:11.886 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
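
The garbage-collector spec above wires three pods into an ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, as shown in the OwnerReferences dumps) and checks that collection is not wedged by it. Roughly the same wiring can be applied by hand; the pods are assumed to exist already in the current namespace, and the UIDs must be read from the live objects:

UID1=$(kubectl get pod pod1 -o jsonpath='{.metadata.uid}')
UID2=$(kubectl get pod pod2 -o jsonpath='{.metadata.uid}')
UID3=$(kubectl get pod pod3 -o jsonpath='{.metadata.uid}')

# pod1 is owned by pod3, pod2 by pod1, pod3 by pod2: a dependency circle.
kubectl patch pod pod1 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod3\",\"uid\":\"$UID3\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
kubectl patch pod pod2 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod1\",\"uid\":\"$UID1\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
kubectl patch pod pod3 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod2\",\"uid\":\"$UID2\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"

# Deleting any one of the pods should still let the collector clean up the rest.
kubectl delete pod pod1
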
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:05:48.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:05:49.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-p4x2r" for this suite.
Mar 17 12:06:11.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:06:11.157: INFO: namespace: e2e-tests-kubelet-test-p4x2r, resource: bindings, ignored listing per whitelist
Mar 17 12:06:11.251: INFO: namespace e2e-tests-kubelet-test-p4x2r deletion completed in 22.198394795s

• [SLOW TEST:22.382 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:06:11.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-hntlb
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Mar 17 12:06:11.584: INFO: Found 0 stateful pods, waiting for 3
Mar 17 12:06:21.599: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 17 12:06:21.599: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 17 12:06:21.599: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Mar 17 12:06:31.588: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 17 12:06:31.588: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 17 12:06:31.588: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Mar 17 12:06:31.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hntlb ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Mar 17 12:06:31.820: INFO: stderr: ""
Mar 17 12:06:31.820: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 17 12:06:31.820: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Mar 17 12:06:41.864: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Mar 17 12:06:51.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hntlb ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 12:06:52.148: INFO: stderr: ""
Mar 17 12:06:52.148: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 17 12:06:52.148: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Mar 17 12:07:12.174: INFO: Waiting for StatefulSet e2e-tests-statefulset-hntlb/ss2 to complete update
Mar 17 12:07:12.174: INFO: Waiting for Pod e2e-tests-statefulset-hntlb/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666
Mar 17 12:07:22.182: INFO: Waiting for StatefulSet e2e-tests-statefulset-hntlb/ss2 to complete update
STEP: Rolling back to a previous revision
Mar 17 12:07:32.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hntlb ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Mar 17 12:07:32.431: INFO: stderr: ""
Mar 17 12:07:32.431: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 17 12:07:32.431: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Mar 17 12:07:42.459: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Mar 17 12:07:52.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hntlb ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 12:07:52.845: INFO: stderr: ""
Mar 17 12:07:52.845: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 17 12:07:52.845: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Mar 17 12:08:03.022: INFO: Waiting for StatefulSet e2e-tests-statefulset-hntlb/ss2 to complete update
Mar 17 12:08:03.022: INFO: Waiting for Pod e2e-tests-statefulset-hntlb/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9
Mar 17 12:08:03.022: INFO: Waiting for Pod e2e-tests-statefulset-hntlb/ss2-1 to have revision ss2-787997d666 update revision ss2-c79899b9
Mar 17 12:08:03.022: INFO: Waiting for Pod e2e-tests-statefulset-hntlb/ss2-2 to have revision ss2-787997d666 update revision ss2-c79899b9
Mar 17 12:08:13.042: INFO: Waiting for StatefulSet e2e-tests-statefulset-hntlb/ss2 to complete update
Mar 17 12:08:13.042: INFO: Waiting for Pod e2e-tests-statefulset-hntlb/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9
Mar 17 12:08:13.042: INFO: Waiting for Pod e2e-tests-statefulset-hntlb/ss2-1 to have revision ss2-787997d666 update revision ss2-c79899b9
Mar 17 12:08:23.029: INFO: Waiting for StatefulSet e2e-tests-statefulset-hntlb/ss2 to complete update
Mar 17 12:08:23.029: INFO: Waiting for Pod e2e-tests-statefulset-hntlb/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9
Mar 17 12:08:23.029: INFO: Waiting for Pod e2e-tests-statefulset-hntlb/ss2-1 to have revision ss2-787997d666 update revision ss2-c79899b9
Mar 17 12:08:33.033: INFO: Waiting for StatefulSet e2e-tests-statefulset-hntlb/ss2 to complete update
Mar 17 12:08:33.033: INFO: Waiting for Pod e2e-tests-statefulset-hntlb/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Mar 17 12:08:43.028: INFO: Deleting all statefulset in ns e2e-tests-statefulset-hntlb
Mar 17 12:08:43.030: INFO: Scaling statefulset ss2 to 0
Mar 17 12:09:13.048: INFO: Waiting for statefulset status.replicas updated to 0
Mar 17 12:09:13.050: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:09:13.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-hntlb" for this suite.
Mar 17 12:09:21.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:09:21.394: INFO: namespace: e2e-tests-statefulset-hntlb, resource: bindings, ignored listing per whitelist
Mar 17 12:09:21.477: INFO: namespace e2e-tests-statefulset-hntlb deletion completed in 8.407508656s

• [SLOW TEST:190.226 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
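
The StatefulSet spec above exercises a rolling image update (nginx:1.14-alpine to nginx:1.15-alpine) followed by a rollback to the previous controller revision. With plain kubectl the same flow looks roughly like this, assuming a StatefulSet named ss2 whose container is named nginx (the container name is an assumption here):

kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl rollout status statefulset/ss2     # wait for the new revision to roll out
kubectl get controllerrevisions            # both template revisions are retained

# Roll the pod template back to the previous revision.
kubectl rollout undo statefulset/ss2
kubectl rollout status statefulset/ss2
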
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:09:21.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 17 12:09:21.659: INFO: Waiting up to 5m0s for pod "pod-7f766d75-48ad-11e9-bf64-0242ac110009" in namespace "e2e-tests-emptydir-pfr8k" to be "success or failure"
Mar 17 12:09:21.699: INFO: Pod "pod-7f766d75-48ad-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 39.89107ms
Mar 17 12:09:23.702: INFO: Pod "pod-7f766d75-48ad-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043076547s
Mar 17 12:09:25.776: INFO: Pod "pod-7f766d75-48ad-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116464572s
Mar 17 12:09:27.778: INFO: Pod "pod-7f766d75-48ad-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.118903569s
STEP: Saw pod success
Mar 17 12:09:27.778: INFO: Pod "pod-7f766d75-48ad-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:09:27.779: INFO: Trying to get logs from node kube pod pod-7f766d75-48ad-11e9-bf64-0242ac110009 container test-container: 
STEP: delete the pod
Mar 17 12:09:27.817: INFO: Waiting for pod pod-7f766d75-48ad-11e9-bf64-0242ac110009 to disappear
Mar 17 12:09:27.831: INFO: Pod pod-7f766d75-48ad-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:09:27.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pfr8k" for this suite.
Mar 17 12:09:33.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:09:33.904: INFO: namespace: e2e-tests-emptydir-pfr8k, resource: bindings, ignored listing per whitelist
Mar 17 12:09:33.947: INFO: namespace e2e-tests-emptydir-pfr8k deletion completed in 6.111191692s

• [SLOW TEST:12.470 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
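
The EmptyDir spec above checks that a tmpfs-backed emptyDir is writable by a non-root user and that a 0666 file mode is honoured. A minimal stand-in (UID, paths and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /mnt/volume; touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory         # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-tmpfs-demo
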
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:09:33.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 17 12:09:34.435: INFO: Number of nodes with available pods: 0
Mar 17 12:09:34.435: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:35.449: INFO: Number of nodes with available pods: 0
Mar 17 12:09:35.449: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:36.446: INFO: Number of nodes with available pods: 0
Mar 17 12:09:36.446: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:37.676: INFO: Number of nodes with available pods: 0
Mar 17 12:09:37.676: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:38.441: INFO: Number of nodes with available pods: 0
Mar 17 12:09:38.441: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:39.445: INFO: Number of nodes with available pods: 0
Mar 17 12:09:39.445: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:41.111: INFO: Number of nodes with available pods: 0
Mar 17 12:09:41.111: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:41.442: INFO: Number of nodes with available pods: 0
Mar 17 12:09:41.442: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:42.450: INFO: Number of nodes with available pods: 1
Mar 17 12:09:42.450: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Mar 17 12:09:42.483: INFO: Number of nodes with available pods: 0
Mar 17 12:09:42.483: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:43.493: INFO: Number of nodes with available pods: 0
Mar 17 12:09:43.493: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:44.489: INFO: Number of nodes with available pods: 0
Mar 17 12:09:44.489: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:45.489: INFO: Number of nodes with available pods: 0
Mar 17 12:09:45.489: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:46.492: INFO: Number of nodes with available pods: 0
Mar 17 12:09:46.492: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:47.490: INFO: Number of nodes with available pods: 0
Mar 17 12:09:47.490: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:48.489: INFO: Number of nodes with available pods: 0
Mar 17 12:09:48.489: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:49.489: INFO: Number of nodes with available pods: 0
Mar 17 12:09:49.489: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:50.488: INFO: Number of nodes with available pods: 0
Mar 17 12:09:50.488: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:52.223: INFO: Number of nodes with available pods: 0
Mar 17 12:09:52.223: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:52.490: INFO: Number of nodes with available pods: 0
Mar 17 12:09:52.490: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:53.498: INFO: Number of nodes with available pods: 0
Mar 17 12:09:53.498: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:54.635: INFO: Number of nodes with available pods: 0
Mar 17 12:09:54.635: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:55.488: INFO: Number of nodes with available pods: 0
Mar 17 12:09:55.488: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:56.490: INFO: Number of nodes with available pods: 0
Mar 17 12:09:56.490: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:57.492: INFO: Number of nodes with available pods: 0
Mar 17 12:09:57.492: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:59.241: INFO: Number of nodes with available pods: 0
Mar 17 12:09:59.241: INFO: Node kube is running more than one daemon pod
Mar 17 12:09:59.634: INFO: Number of nodes with available pods: 0
Mar 17 12:09:59.634: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:00.490: INFO: Number of nodes with available pods: 0
Mar 17 12:10:00.490: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:01.490: INFO: Number of nodes with available pods: 0
Mar 17 12:10:01.490: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:02.737: INFO: Number of nodes with available pods: 0
Mar 17 12:10:02.737: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:03.494: INFO: Number of nodes with available pods: 0
Mar 17 12:10:03.494: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:04.493: INFO: Number of nodes with available pods: 0
Mar 17 12:10:04.493: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:05.624: INFO: Number of nodes with available pods: 0
Mar 17 12:10:05.624: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:06.510: INFO: Number of nodes with available pods: 0
Mar 17 12:10:06.510: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:07.567: INFO: Number of nodes with available pods: 0
Mar 17 12:10:07.567: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:08.491: INFO: Number of nodes with available pods: 0
Mar 17 12:10:08.491: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:09.494: INFO: Number of nodes with available pods: 0
Mar 17 12:10:09.494: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:10.494: INFO: Number of nodes with available pods: 0
Mar 17 12:10:10.494: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:11.509: INFO: Number of nodes with available pods: 0
Mar 17 12:10:11.509: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:12.490: INFO: Number of nodes with available pods: 0
Mar 17 12:10:12.490: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:13.490: INFO: Number of nodes with available pods: 0
Mar 17 12:10:13.490: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:14.489: INFO: Number of nodes with available pods: 0
Mar 17 12:10:14.489: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:15.632: INFO: Number of nodes with available pods: 0
Mar 17 12:10:15.632: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:17.569: INFO: Number of nodes with available pods: 0
Mar 17 12:10:17.569: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:18.490: INFO: Number of nodes with available pods: 0
Mar 17 12:10:18.490: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:19.493: INFO: Number of nodes with available pods: 0
Mar 17 12:10:19.493: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:20.499: INFO: Number of nodes with available pods: 0
Mar 17 12:10:20.499: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:21.489: INFO: Number of nodes with available pods: 0
Mar 17 12:10:21.489: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:22.491: INFO: Number of nodes with available pods: 0
Mar 17 12:10:22.491: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:23.536: INFO: Number of nodes with available pods: 0
Mar 17 12:10:23.536: INFO: Node kube is running more than one daemon pod
Mar 17 12:10:24.488: INFO: Number of nodes with available pods: 1
Mar 17 12:10:24.488: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-45xlj, will wait for the garbage collector to delete the pods
Mar 17 12:10:24.652: INFO: Deleting DaemonSet.extensions daemon-set took: 110.634433ms
Mar 17 12:10:24.752: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.157792ms
Mar 17 12:10:58.755: INFO: Number of nodes with available pods: 0
Mar 17 12:10:58.755: INFO: Number of running nodes: 0, number of available pods: 0
Mar 17 12:10:58.756: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-45xlj/daemonsets","resourceVersion":"1295812"},"items":null}

Mar 17 12:10:58.758: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-45xlj/pods","resourceVersion":"1295812"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:10:58.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-45xlj" for this suite.
Mar 17 12:11:04.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:11:04.905: INFO: namespace: e2e-tests-daemonsets-45xlj, resource: bindings, ignored listing per whitelist
Mar 17 12:11:04.912: INFO: namespace e2e-tests-daemonsets-45xlj deletion completed in 6.146421205s

• [SLOW TEST:90.965 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
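
The Daemon set spec above creates a simple DaemonSet, waits for one pod per node (a single node here), kills the pod and waits for it to be revived, then tears everything down. The equivalent with plain kubectl, using an illustrative image:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx:1.15-alpine
EOF

kubectl rollout status daemonset/daemon-set   # one daemon pod per schedulable node
kubectl get pods -l app=daemon-set -o wide

# Kill the daemon pod; the controller revives it on the same node.
kubectl delete pod -l app=daemon-set --wait=false
kubectl get pods -l app=daemon-set -w

# Tear down; the dependent pods are removed by the garbage collector.
kubectl delete daemonset daemon-set
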
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:11:04.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 12:11:05.102: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd1e7b21-48ad-11e9-bf64-0242ac110009" in namespace "e2e-tests-downward-api-8q769" to be "success or failure"
Mar 17 12:11:05.118: INFO: Pod "downwardapi-volume-bd1e7b21-48ad-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.543658ms
Mar 17 12:11:07.123: INFO: Pod "downwardapi-volume-bd1e7b21-48ad-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021748888s
Mar 17 12:11:09.131: INFO: Pod "downwardapi-volume-bd1e7b21-48ad-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029559131s
Mar 17 12:11:11.134: INFO: Pod "downwardapi-volume-bd1e7b21-48ad-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032682892s
STEP: Saw pod success
Mar 17 12:11:11.134: INFO: Pod "downwardapi-volume-bd1e7b21-48ad-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:11:11.136: INFO: Trying to get logs from node kube pod downwardapi-volume-bd1e7b21-48ad-11e9-bf64-0242ac110009 container client-container: 
STEP: delete the pod
Mar 17 12:11:11.206: INFO: Waiting for pod downwardapi-volume-bd1e7b21-48ad-11e9-bf64-0242ac110009 to disappear
Mar 17 12:11:11.494: INFO: Pod downwardapi-volume-bd1e7b21-48ad-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:11:11.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8q769" for this suite.
Mar 17 12:11:17.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:11:17.647: INFO: namespace: e2e-tests-downward-api-8q769, resource: bindings, ignored listing per whitelist
Mar 17 12:11:17.701: INFO: namespace e2e-tests-downward-api-8q769 deletion completed in 6.194227647s

• [SLOW TEST:12.789 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
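
The Downward API spec above only needs the pod's own name exposed through a downwardAPI volume; a compact reproduction (names are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downwardapi-podname-demo   # prints the pod's name
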
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:11:17.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 17 12:11:18.982: INFO: Waiting up to 5m0s for pod "pod-c5612822-48ad-11e9-bf64-0242ac110009" in namespace "e2e-tests-emptydir-29jfb" to be "success or failure"
Mar 17 12:11:19.201: INFO: Pod "pod-c5612822-48ad-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 218.568325ms
Mar 17 12:11:21.204: INFO: Pod "pod-c5612822-48ad-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221420971s
Mar 17 12:11:23.208: INFO: Pod "pod-c5612822-48ad-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.225579198s
STEP: Saw pod success
Mar 17 12:11:23.208: INFO: Pod "pod-c5612822-48ad-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:11:23.210: INFO: Trying to get logs from node kube pod pod-c5612822-48ad-11e9-bf64-0242ac110009 container test-container: 
STEP: delete the pod
Mar 17 12:11:23.272: INFO: Waiting for pod pod-c5612822-48ad-11e9-bf64-0242ac110009 to disappear
Mar 17 12:11:23.284: INFO: Pod pod-c5612822-48ad-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:11:23.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-29jfb" for this suite.
Mar 17 12:11:29.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:11:29.334: INFO: namespace: e2e-tests-emptydir-29jfb, resource: bindings, ignored listing per whitelist
Mar 17 12:11:29.560: INFO: namespace e2e-tests-emptydir-29jfb deletion completed in 6.271468354s

• [SLOW TEST:11.859 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:11:29.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 12:11:29.784: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cbd4c0bc-48ad-11e9-bf64-0242ac110009" in namespace "e2e-tests-downward-api-7qcx8" to be "success or failure"
Mar 17 12:11:29.814: INFO: Pod "downwardapi-volume-cbd4c0bc-48ad-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 30.487644ms
Mar 17 12:11:31.818: INFO: Pod "downwardapi-volume-cbd4c0bc-48ad-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033578459s
Mar 17 12:11:33.820: INFO: Pod "downwardapi-volume-cbd4c0bc-48ad-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036478979s
STEP: Saw pod success
Mar 17 12:11:33.821: INFO: Pod "downwardapi-volume-cbd4c0bc-48ad-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:11:33.823: INFO: Trying to get logs from node kube pod downwardapi-volume-cbd4c0bc-48ad-11e9-bf64-0242ac110009 container client-container: 
STEP: delete the pod
Mar 17 12:11:33.851: INFO: Waiting for pod downwardapi-volume-cbd4c0bc-48ad-11e9-bf64-0242ac110009 to disappear
Mar 17 12:11:33.855: INFO: Pod downwardapi-volume-cbd4c0bc-48ad-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:11:33.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7qcx8" for this suite.
Mar 17 12:11:39.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:11:39.973: INFO: namespace: e2e-tests-downward-api-7qcx8, resource: bindings, ignored listing per whitelist
Mar 17 12:11:39.992: INFO: namespace e2e-tests-downward-api-7qcx8 deletion completed in 6.133831475s

• [SLOW TEST:10.432 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:11:39.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-fng8d
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 17 12:11:40.112: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 17 12:12:10.285: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-fng8d PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 12:12:10.285: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 12:12:10.514: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:12:10.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-fng8d" for this suite.
Mar 17 12:12:34.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:12:34.655: INFO: namespace: e2e-tests-pod-network-test-fng8d, resource: bindings, ignored listing per whitelist
Mar 17 12:12:34.692: INFO: namespace e2e-tests-pod-network-test-fng8d deletion completed in 24.114323644s

• [SLOW TEST:54.699 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
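
The Granular Checks spec above builds its own test pods and dials them over HTTP via pod IPs. Stripped of the test harness, pod-to-pod HTTP connectivity can be checked by hand like this (image choices are placeholders, not the test's images):

kubectl run web --image=nginx:1.15-alpine --restart=Never --port=80
kubectl run client --image=busybox --restart=Never --command -- sleep 3600

# Fetch the server's pod IP and request it from inside the client pod.
WEB_IP=$(kubectl get pod web -o jsonpath='{.status.podIP}')
kubectl exec client -- wget -qO- "http://$WEB_IP:80/"
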
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:12:34.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:12:42.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-hfxtq" for this suite.
Mar 17 12:12:48.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:12:48.875: INFO: namespace: e2e-tests-kubelet-test-hfxtq, resource: bindings, ignored listing per whitelist
Mar 17 12:12:48.941: INFO: namespace e2e-tests-kubelet-test-hfxtq deletion completed in 6.090537154s

• [SLOW TEST:14.250 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
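
The Kubelet spec above schedules a busybox command that always fails and asserts the container status ends up with a terminated reason. A direct way to observe the same status fields (restartPolicy: Never keeps the status simple; names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]   # always exits with a non-zero code
EOF

# After the container has run once, the terminated reason and exit code are
# visible in the pod status:
kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason} {.status.containerStatuses[0].state.terminated.exitCode}{"\n"}'
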
SSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:12:48.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 17 12:12:53.619: INFO: Successfully updated pod "annotationupdatefb1a7ac8-48ad-11e9-bf64-0242ac110009"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:12:55.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fmz2t" for this suite.
Mar 17 12:13:11.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:13:11.715: INFO: namespace: e2e-tests-downward-api-fmz2t, resource: bindings, ignored listing per whitelist
Mar 17 12:13:11.761: INFO: namespace e2e-tests-downward-api-fmz2t deletion completed in 16.100692429s

• [SLOW TEST:22.819 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
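
The annotation-update spec above relies on the kubelet refreshing downwardAPI volume files when pod metadata changes, without restarting the container. A sketch of the same behaviour (annotation key and values are made up):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF

# Change the annotation on the live pod; the mounted file is refreshed by the
# kubelet on its next sync, without a container restart.
kubectl annotate pod annotationupdate-demo builder=bob --overwrite
kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations
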
SSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:13:11.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-tx99l/configmap-test-08b99e33-48ae-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume configMaps
Mar 17 12:13:12.117: INFO: Waiting up to 5m0s for pod "pod-configmaps-08bd19ed-48ae-11e9-bf64-0242ac110009" in namespace "e2e-tests-configmap-tx99l" to be "success or failure"
Mar 17 12:13:12.131: INFO: Pod "pod-configmaps-08bd19ed-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 13.912186ms
Mar 17 12:13:14.161: INFO: Pod "pod-configmaps-08bd19ed-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044585692s
Mar 17 12:13:16.164: INFO: Pod "pod-configmaps-08bd19ed-48ae-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04754595s
STEP: Saw pod success
Mar 17 12:13:16.164: INFO: Pod "pod-configmaps-08bd19ed-48ae-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:13:16.166: INFO: Trying to get logs from node kube pod pod-configmaps-08bd19ed-48ae-11e9-bf64-0242ac110009 container env-test: 
STEP: delete the pod
Mar 17 12:13:16.396: INFO: Waiting for pod pod-configmaps-08bd19ed-48ae-11e9-bf64-0242ac110009 to disappear
Mar 17 12:13:16.409: INFO: Pod pod-configmaps-08bd19ed-48ae-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:13:16.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tx99l" for this suite.
Mar 17 12:13:22.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:13:22.518: INFO: namespace: e2e-tests-configmap-tx99l, resource: bindings, ignored listing per whitelist
Mar 17 12:13:22.527: INFO: namespace e2e-tests-configmap-tx99l deletion completed in 6.114602783s

• [SLOW TEST:10.766 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
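
The steps above reduce to creating a ConfigMap and a pod that maps one of its keys into an environment variable. A rough equivalent with the kubernetes Python client (names are illustrative):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

cm = client.V1ConfigMap(metadata=client.V1ObjectMeta(name="configmap-env-demo"),
                        data={"data-1": "value-1"})
v1.create_namespaced_config_map(namespace="default", body=cm)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="pod-configmaps-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(
            name="env-test",
            image="busybox:1.29",
            command=["sh", "-c", "env"],
            env=[client.V1EnvVar(
                name="CONFIG_DATA_1",
                value_from=client.V1EnvVarSource(
                    config_map_key_ref=client.V1ConfigMapKeySelector(
                        name="configmap-env-demo", key="data-1")))])]))
v1.create_namespaced_pod(namespace="default", body=pod)
# The pod runs to completion ("Succeeded" above) and its log contains CONFIG_DATA_1=value-1.
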
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:13:22.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:13:29.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-wtrkw" for this suite.
Mar 17 12:13:52.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:13:52.628: INFO: namespace: e2e-tests-replication-controller-wtrkw, resource: bindings, ignored listing per whitelist
Mar 17 12:13:52.686: INFO: namespace e2e-tests-replication-controller-wtrkw deletion completed in 22.960660603s

• [SLOW TEST:30.159 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:13:52.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0317 12:14:04.593668       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 17 12:14:04.593: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:14:04.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-dh6kf" for this suite.
Mar 17 12:14:13.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:14:13.995: INFO: namespace: e2e-tests-gc-dh6kf, resource: bindings, ignored listing per whitelist
Mar 17 12:14:14.023: INFO: namespace e2e-tests-gc-dh6kf deletion completed in 9.127926189s

• [SLOW TEST:21.336 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:14:14.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-2e22e392-48ae-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume secrets
Mar 17 12:14:15.016: INFO: Waiting up to 5m0s for pod "pod-secrets-2e27b0a8-48ae-11e9-bf64-0242ac110009" in namespace "e2e-tests-secrets-dc598" to be "success or failure"
Mar 17 12:14:15.103: INFO: Pod "pod-secrets-2e27b0a8-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 87.621468ms
Mar 17 12:14:17.217: INFO: Pod "pod-secrets-2e27b0a8-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201484348s
Mar 17 12:14:19.220: INFO: Pod "pod-secrets-2e27b0a8-48ae-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.204832029s
STEP: Saw pod success
Mar 17 12:14:19.220: INFO: Pod "pod-secrets-2e27b0a8-48ae-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:14:19.223: INFO: Trying to get logs from node kube pod pod-secrets-2e27b0a8-48ae-11e9-bf64-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Mar 17 12:14:19.296: INFO: Waiting for pod pod-secrets-2e27b0a8-48ae-11e9-bf64-0242ac110009 to disappear
Mar 17 12:14:19.342: INFO: Pod pod-secrets-2e27b0a8-48ae-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:14:19.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dc598" for this suite.
Mar 17 12:14:25.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:14:25.690: INFO: namespace: e2e-tests-secrets-dc598, resource: bindings, ignored listing per whitelist
Mar 17 12:14:25.837: INFO: namespace e2e-tests-secrets-dc598 deletion completed in 6.492820834s

• [SLOW TEST:11.814 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:14:25.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-kd6qp.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-kd6qp.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-kd6qp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-kd6qp.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-kd6qp.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-kd6qp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 17 12:14:32.200: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-kd6qp/dns-test-34e15d50-48ae-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-34e15d50-48ae-11e9-bf64-0242ac110009)
Mar 17 12:14:32.202: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-kd6qp/dns-test-34e15d50-48ae-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-34e15d50-48ae-11e9-bf64-0242ac110009)
Mar 17 12:14:32.204: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-kd6qp/dns-test-34e15d50-48ae-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-34e15d50-48ae-11e9-bf64-0242ac110009)
Mar 17 12:14:32.205: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-kd6qp/dns-test-34e15d50-48ae-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-34e15d50-48ae-11e9-bf64-0242ac110009)
Mar 17 12:14:32.207: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-kd6qp/dns-test-34e15d50-48ae-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-34e15d50-48ae-11e9-bf64-0242ac110009)
Mar 17 12:14:32.209: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-kd6qp/dns-test-34e15d50-48ae-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-34e15d50-48ae-11e9-bf64-0242ac110009)
Mar 17 12:14:32.212: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-kd6qp.svc.cluster.local from pod e2e-tests-dns-kd6qp/dns-test-34e15d50-48ae-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-34e15d50-48ae-11e9-bf64-0242ac110009)
Mar 17 12:14:32.214: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-kd6qp/dns-test-34e15d50-48ae-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-34e15d50-48ae-11e9-bf64-0242ac110009)
Mar 17 12:14:32.216: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-kd6qp/dns-test-34e15d50-48ae-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-34e15d50-48ae-11e9-bf64-0242ac110009)
Mar 17 12:14:32.218: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-kd6qp/dns-test-34e15d50-48ae-11e9-bf64-0242ac110009: the server could not find the requested resource (get pods dns-test-34e15d50-48ae-11e9-bf64-0242ac110009)
Mar 17 12:14:32.239: INFO: Lookups using e2e-tests-dns-kd6qp/dns-test-34e15d50-48ae-11e9-bf64-0242ac110009 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-kd6qp.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord]

Mar 17 12:14:37.438: INFO: DNS probes using e2e-tests-dns-kd6qp/dns-test-34e15d50-48ae-11e9-bf64-0242ac110009 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:14:37.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-kd6qp" for this suite.
Mar 17 12:14:43.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:14:43.965: INFO: namespace: e2e-tests-dns-kd6qp, resource: bindings, ignored listing per whitelist
Mar 17 12:14:44.007: INFO: namespace e2e-tests-dns-kd6qp deletion completed in 6.301458693s

• [SLOW TEST:18.169 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:14:44.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Mar 17 12:14:44.118: INFO: Waiting up to 5m0s for pod "downward-api-3fa493ae-48ae-11e9-bf64-0242ac110009" in namespace "e2e-tests-downward-api-86btv" to be "success or failure"
Mar 17 12:14:44.129: INFO: Pod "downward-api-3fa493ae-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 11.499549ms
Mar 17 12:14:46.133: INFO: Pod "downward-api-3fa493ae-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015815784s
Mar 17 12:14:48.138: INFO: Pod "downward-api-3fa493ae-48ae-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020343287s
STEP: Saw pod success
Mar 17 12:14:48.138: INFO: Pod "downward-api-3fa493ae-48ae-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:14:48.141: INFO: Trying to get logs from node kube pod downward-api-3fa493ae-48ae-11e9-bf64-0242ac110009 container dapi-container: 
STEP: delete the pod
Mar 17 12:14:48.290: INFO: Waiting for pod downward-api-3fa493ae-48ae-11e9-bf64-0242ac110009 to disappear
Mar 17 12:14:48.333: INFO: Pod downward-api-3fa493ae-48ae-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:14:48.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-86btv" for this suite.
Mar 17 12:14:54.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:14:54.456: INFO: namespace: e2e-tests-downward-api-86btv, resource: bindings, ignored listing per whitelist
Mar 17 12:14:54.472: INFO: namespace e2e-tests-downward-api-86btv deletion completed in 6.135975127s

• [SLOW TEST:10.465 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
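
The downward API env-var case maps a container's own requests/limits into environment variables via resourceFieldRef. A sketch with the kubernetes Python client (names and resource values are illustrative):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

container = client.V1Container(
    name="dapi-container",
    image="busybox:1.29",
    command=["sh", "-c", "env"],
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "32Mi"},
        limits={"cpu": "500m", "memory": "64Mi"}),
    env=[client.V1EnvVar(
            name="CPU_LIMIT",
            value_from=client.V1EnvVarSource(
                resource_field_ref=client.V1ResourceFieldSelector(resource="limits.cpu"))),
         client.V1EnvVar(
            name="MEMORY_REQUEST",
            value_from=client.V1EnvVarSource(
                resource_field_ref=client.V1ResourceFieldSelector(resource="requests.memory")))])

pod = client.V1Pod(metadata=client.V1ObjectMeta(name="downward-api-demo"),
                   spec=client.V1PodSpec(restart_policy="Never", containers=[container]))
v1.create_namespaced_pod(namespace="default", body=pod)
# The container log then contains CPU_LIMIT=1 (rounded up to whole cores with the
# default divisor) and MEMORY_REQUEST=33554432.
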
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:14:54.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Mar 17 12:14:55.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-pwczp run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Mar 17 12:15:00.791: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Mar 17 12:15:00.791: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:15:02.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pwczp" for this suite.
Mar 17 12:15:08.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:15:08.836: INFO: namespace: e2e-tests-kubectl-pwczp, resource: bindings, ignored listing per whitelist
Mar 17 12:15:08.887: INFO: namespace e2e-tests-kubectl-pwczp deletion completed in 6.089567303s

• [SLOW TEST:14.415 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:15:08.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 17 12:15:13.709: INFO: Successfully updated pod "annotationupdate4e940d02-48ae-11e9-bf64-0242ac110009"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:15:15.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-x56vz" for this suite.
Mar 17 12:15:37.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:15:37.919: INFO: namespace: e2e-tests-projected-x56vz, resource: bindings, ignored listing per whitelist
Mar 17 12:15:37.953: INFO: namespace e2e-tests-projected-x56vz deletion completed in 22.202269861s

• [SLOW TEST:29.066 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:15:37.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-h8pjd
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-h8pjd to expose endpoints map[]
Mar 17 12:15:38.459: INFO: Get endpoints failed (9.826096ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Mar 17 12:15:39.462: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-h8pjd exposes endpoints map[] (1.012979924s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-h8pjd
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-h8pjd to expose endpoints map[pod1:[100]]
Mar 17 12:15:43.852: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.380650672s elapsed, will retry)
Mar 17 12:15:44.869: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-h8pjd exposes endpoints map[pod1:[100]] (5.397350892s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-h8pjd
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-h8pjd to expose endpoints map[pod1:[100] pod2:[101]]
Mar 17 12:15:49.177: INFO: Unexpected endpoints: found map[60a9aaf0-48ae-11e9-a072-fa163e921bae:[100]], expected map[pod1:[100] pod2:[101]] (4.274278377s elapsed, will retry)
Mar 17 12:15:50.187: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-h8pjd exposes endpoints map[pod1:[100] pod2:[101]] (5.283977698s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-h8pjd
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-h8pjd to expose endpoints map[pod2:[101]]
Mar 17 12:15:51.229: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-h8pjd exposes endpoints map[pod2:[101]] (1.040098527s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-h8pjd
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-h8pjd to expose endpoints map[]
Mar 17 12:15:52.308: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-h8pjd exposes endpoints map[] (1.076135782s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:15:52.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-h8pjd" for this suite.
Mar 17 12:15:58.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:15:58.761: INFO: namespace: e2e-tests-services-h8pjd, resource: bindings, ignored listing per whitelist
Mar 17 12:15:58.768: INFO: namespace e2e-tests-services-h8pjd deletion completed in 6.280613503s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:20.815 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:15:58.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 17 12:15:58.934: INFO: Waiting up to 5m0s for pod "pod-6c39227b-48ae-11e9-bf64-0242ac110009" in namespace "e2e-tests-emptydir-9g6xs" to be "success or failure"
Mar 17 12:15:58.946: INFO: Pod "pod-6c39227b-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 11.326161ms
Mar 17 12:16:00.948: INFO: Pod "pod-6c39227b-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013961593s
Mar 17 12:16:02.952: INFO: Pod "pod-6c39227b-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017637997s
Mar 17 12:16:04.955: INFO: Pod "pod-6c39227b-48ae-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020933955s
STEP: Saw pod success
Mar 17 12:16:04.955: INFO: Pod "pod-6c39227b-48ae-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:16:04.957: INFO: Trying to get logs from node kube pod pod-6c39227b-48ae-11e9-bf64-0242ac110009 container test-container: 
STEP: delete the pod
Mar 17 12:16:05.015: INFO: Waiting for pod pod-6c39227b-48ae-11e9-bf64-0242ac110009 to disappear
Mar 17 12:16:05.018: INFO: Pod pod-6c39227b-48ae-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:16:05.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9g6xs" for this suite.
Mar 17 12:16:11.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:16:11.126: INFO: namespace: e2e-tests-emptydir-9g6xs, resource: bindings, ignored listing per whitelist
Mar 17 12:16:11.164: INFO: namespace e2e-tests-emptydir-9g6xs deletion completed in 6.14015598s

• [SLOW TEST:12.396 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:16:11.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 12:16:11.817: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73a423f7-48ae-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-9hlbm" to be "success or failure"
Mar 17 12:16:11.827: INFO: Pod "downwardapi-volume-73a423f7-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 10.441223ms
Mar 17 12:16:13.949: INFO: Pod "downwardapi-volume-73a423f7-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132328081s
Mar 17 12:16:15.952: INFO: Pod "downwardapi-volume-73a423f7-48ae-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.135275223s
STEP: Saw pod success
Mar 17 12:16:15.952: INFO: Pod "downwardapi-volume-73a423f7-48ae-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:16:15.955: INFO: Trying to get logs from node kube pod downwardapi-volume-73a423f7-48ae-11e9-bf64-0242ac110009 container client-container: 
STEP: delete the pod
Mar 17 12:16:16.000: INFO: Waiting for pod downwardapi-volume-73a423f7-48ae-11e9-bf64-0242ac110009 to disappear
Mar 17 12:16:16.168: INFO: Pod downwardapi-volume-73a423f7-48ae-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:16:16.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9hlbm" for this suite.
Mar 17 12:16:22.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:16:22.473: INFO: namespace: e2e-tests-projected-9hlbm, resource: bindings, ignored listing per whitelist
Mar 17 12:16:22.537: INFO: namespace e2e-tests-projected-9hlbm deletion completed in 6.356818824s

• [SLOW TEST:11.373 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:16:22.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 12:16:22.733: INFO: (0) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.902859ms)
Mar 17 12:16:22.738: INFO: (1) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.026079ms)
Mar 17 12:16:22.742: INFO: (2) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.626623ms)
Mar 17 12:16:22.745: INFO: (3) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.04991ms)
Mar 17 12:16:22.751: INFO: (4) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.454938ms)
Mar 17 12:16:22.755: INFO: (5) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.381694ms)
Mar 17 12:16:22.759: INFO: (6) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.640405ms)
Mar 17 12:16:22.763: INFO: (7) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.625885ms)
Mar 17 12:16:22.767: INFO: (8) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.41457ms)
Mar 17 12:16:22.772: INFO: (9) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.430668ms)
Mar 17 12:16:22.776: INFO: (10) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.463549ms)
Mar 17 12:16:22.779: INFO: (11) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.386759ms)
Mar 17 12:16:22.783: INFO: (12) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.795602ms)
Mar 17 12:16:22.791: INFO: (13) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 7.900882ms)
Mar 17 12:16:22.795: INFO: (14) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.209693ms)
Mar 17 12:16:22.801: INFO: (15) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.627143ms)
Mar 17 12:16:22.804: INFO: (16) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.281943ms)
Mar 17 12:16:22.808: INFO: (17) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.915809ms)
Mar 17 12:16:22.816: INFO: (18) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 7.649679ms)
Mar 17 12:16:22.871: INFO: (19) /api/v1/nodes/kube:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 55.141174ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:16:22.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-kns46" for this suite.
Mar 17 12:16:28.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:16:29.084: INFO: namespace: e2e-tests-proxy-kns46, resource: bindings, ignored listing per whitelist
Mar 17 12:16:29.096: INFO: namespace e2e-tests-proxy-kns46 deletion completed in 6.22275298s

• [SLOW TEST:6.559 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:16:29.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-7e66ca2d-48ae-11e9-bf64-0242ac110009
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-7e66ca2d-48ae-11e9-bf64-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:17:36.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dt8gz" for this suite.
Mar 17 12:17:58.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:17:58.897: INFO: namespace: e2e-tests-configmap-dt8gz, resource: bindings, ignored listing per whitelist
Mar 17 12:17:58.957: INFO: namespace e2e-tests-configmap-dt8gz deletion completed in 22.08896248s

• [SLOW TEST:89.860 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:17:58.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Mar 17 12:18:03.182: INFO: Pod pod-hostip-b3e82033-48ae-11e9-bf64-0242ac110009 has hostIP: 192.168.100.7
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:18:03.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mvbgb" for this suite.
Mar 17 12:18:25.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:18:25.254: INFO: namespace: e2e-tests-pods-mvbgb, resource: bindings, ignored listing per whitelist
Mar 17 12:18:25.323: INFO: namespace e2e-tests-pods-mvbgb deletion completed in 22.139008587s

• [SLOW TEST:26.366 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:18:25.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-c39bb828-48ae-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume secrets
Mar 17 12:18:25.510: INFO: Waiting up to 5m0s for pod "pod-secrets-c39de6f0-48ae-11e9-bf64-0242ac110009" in namespace "e2e-tests-secrets-lcw89" to be "success or failure"
Mar 17 12:18:25.526: INFO: Pod "pod-secrets-c39de6f0-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.439713ms
Mar 17 12:18:27.530: INFO: Pod "pod-secrets-c39de6f0-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019967048s
Mar 17 12:18:29.747: INFO: Pod "pod-secrets-c39de6f0-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237493006s
Mar 17 12:18:31.750: INFO: Pod "pod-secrets-c39de6f0-48ae-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.24007391s
STEP: Saw pod success
Mar 17 12:18:31.750: INFO: Pod "pod-secrets-c39de6f0-48ae-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:18:31.751: INFO: Trying to get logs from node kube pod pod-secrets-c39de6f0-48ae-11e9-bf64-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Mar 17 12:18:31.983: INFO: Waiting for pod pod-secrets-c39de6f0-48ae-11e9-bf64-0242ac110009 to disappear
Mar 17 12:18:31.988: INFO: Pod pod-secrets-c39de6f0-48ae-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:18:31.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lcw89" for this suite.
Mar 17 12:18:38.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:18:38.112: INFO: namespace: e2e-tests-secrets-lcw89, resource: bindings, ignored listing per whitelist
Mar 17 12:18:38.120: INFO: namespace e2e-tests-secrets-lcw89 deletion completed in 6.130786202s

• [SLOW TEST:12.798 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:18:38.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 17 12:18:38.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-n4qqv'
Mar 17 12:18:38.441: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 17 12:18:38.441: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Mar 17 12:18:42.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-n4qqv'
Mar 17 12:18:42.649: INFO: stderr: ""
Mar 17 12:18:42.649: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:18:42.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-n4qqv" for this suite.
Mar 17 12:18:50.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:18:50.804: INFO: namespace: e2e-tests-kubectl-n4qqv, resource: bindings, ignored listing per whitelist
Mar 17 12:18:50.814: INFO: namespace e2e-tests-kubectl-n4qqv deletion completed in 8.163252056s

• [SLOW TEST:12.693 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
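
kubectl run --generator=deployment/v1beta1 is deprecated, as the stderr above notes; the same Deployment can be created directly against apps/v1. A sketch with the kubernetes Python client:

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="e2e-test-nginx-deployment"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"run": "e2e-test-nginx-deployment"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"run": "e2e-test-nginx-deployment"}),
            spec=client.V1PodSpec(containers=[client.V1Container(
                name="e2e-test-nginx-deployment",
                image="docker.io/library/nginx:1.14-alpine")]))))
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Equivalent of the `kubectl delete deployment` step at the end of the test.
apps.delete_namespaced_deployment("e2e-test-nginx-deployment", "default")
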
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:18:50.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Mar 17 12:18:52.162: INFO: Waiting up to 5m0s for pod "client-containers-d37fd86a-48ae-11e9-bf64-0242ac110009" in namespace "e2e-tests-containers-k5sds" to be "success or failure"
Mar 17 12:18:52.390: INFO: Pod "client-containers-d37fd86a-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 228.204017ms
Mar 17 12:18:54.396: INFO: Pod "client-containers-d37fd86a-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23360443s
Mar 17 12:18:56.400: INFO: Pod "client-containers-d37fd86a-48ae-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.238011103s
STEP: Saw pod success
Mar 17 12:18:56.400: INFO: Pod "client-containers-d37fd86a-48ae-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:18:56.403: INFO: Trying to get logs from node kube pod client-containers-d37fd86a-48ae-11e9-bf64-0242ac110009 container test-container: 
STEP: delete the pod
Mar 17 12:18:56.575: INFO: Waiting for pod client-containers-d37fd86a-48ae-11e9-bf64-0242ac110009 to disappear
Mar 17 12:18:56.579: INFO: Pod client-containers-d37fd86a-48ae-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:18:56.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-k5sds" for this suite.
Mar 17 12:19:02.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:19:02.631: INFO: namespace: e2e-tests-containers-k5sds, resource: bindings, ignored listing per whitelist
Mar 17 12:19:02.677: INFO: namespace e2e-tests-containers-k5sds deletion completed in 6.095101849s

• [SLOW TEST:11.863 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:19:02.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-7lgss in namespace e2e-tests-proxy-clmww
I0317 12:19:02.957903       8 runners.go:184] Created replication controller with name: proxy-service-7lgss, namespace: e2e-tests-proxy-clmww, replica count: 1
I0317 12:19:04.008275       8 runners.go:184] proxy-service-7lgss Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0317 12:19:05.008471       8 runners.go:184] proxy-service-7lgss Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0317 12:19:06.008644       8 runners.go:184] proxy-service-7lgss Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0317 12:19:07.008809       8 runners.go:184] proxy-service-7lgss Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0317 12:19:08.008959       8 runners.go:184] proxy-service-7lgss Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0317 12:19:09.009140       8 runners.go:184] proxy-service-7lgss Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0317 12:19:10.009293       8 runners.go:184] proxy-service-7lgss Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0317 12:19:11.009445       8 runners.go:184] proxy-service-7lgss Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0317 12:19:12.009628       8 runners.go:184] proxy-service-7lgss Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0317 12:19:13.009783       8 runners.go:184] proxy-service-7lgss Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 17 12:19:13.012: INFO: setup took 10.21557198s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Mar 17 12:19:13.019: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-clmww/pods/http:proxy-service-7lgss-dxn55:160/proxy/: foo (200; 6.679506ms)
Mar 17 12:19:13.030: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-clmww/services/http:proxy-service-7lgss:portname1/proxy/: foo (200; 17.902377ms)
Mar 17 12:19:13.031: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-clmww/pods/proxy-service-7lgss-dxn55:1080/proxy/: 
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Mar 17 12:19:34.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Mar 17 12:19:34.487: INFO: stderr: ""
Mar 17 12:19:34.487: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:19:34.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-f7r2l" for this suite.
Mar 17 12:19:40.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:19:40.604: INFO: namespace: e2e-tests-kubectl-f7r2l, resource: bindings, ignored listing per whitelist
Mar 17 12:19:40.638: INFO: namespace e2e-tests-kubectl-f7r2l deletion completed in 6.14816087s

• [SLOW TEST:6.349 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
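The api-versions check above is a thin wrapper around API discovery. A hedged client-go sketch of the same check, listing every served group/version and confirming the core "v1" is among them, assuming a kubeconfig at ~/.kube/config.

package main

import (
    "fmt"
    "log"
    "path/filepath"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }
    // Equivalent of `kubectl api-versions`: list every served group/version
    // and confirm the core "v1" version is present.
    groups, err := clientset.Discovery().ServerGroups()
    if err != nil {
        log.Fatal(err)
    }
    found := false
    for _, g := range groups.Groups {
        for _, v := range g.Versions {
            fmt.Println(v.GroupVersion)
            if v.GroupVersion == "v1" {
                found = true
            }
        }
    }
    fmt.Println("core v1 served:", found)
}
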
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:19:40.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 17 12:19:40.839: INFO: Waiting up to 5m0s for pod "pod-f08349f8-48ae-11e9-bf64-0242ac110009" in namespace "e2e-tests-emptydir-hhl7q" to be "success or failure"
Mar 17 12:19:40.842: INFO: Pod "pod-f08349f8-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.259008ms
Mar 17 12:19:42.867: INFO: Pod "pod-f08349f8-48ae-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028371718s
Mar 17 12:19:44.870: INFO: Pod "pod-f08349f8-48ae-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031074483s
STEP: Saw pod success
Mar 17 12:19:44.870: INFO: Pod "pod-f08349f8-48ae-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:19:44.872: INFO: Trying to get logs from node kube pod pod-f08349f8-48ae-11e9-bf64-0242ac110009 container test-container: 
STEP: delete the pod
Mar 17 12:19:44.898: INFO: Waiting for pod pod-f08349f8-48ae-11e9-bf64-0242ac110009 to disappear
Mar 17 12:19:44.901: INFO: Pod pod-f08349f8-48ae-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:19:44.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hhl7q" for this suite.
Mar 17 12:19:50.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:19:50.996: INFO: namespace: e2e-tests-emptydir-hhl7q, resource: bindings, ignored listing per whitelist
Mar 17 12:19:51.011: INFO: namespace e2e-tests-emptydir-hhl7q deletion completed in 6.089597975s

• [SLOW TEST:10.372 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
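The emptyDir case above mounts a default-medium emptyDir volume into a pod that runs as a non-root user; the 0666 mode bits are applied by the test container when it writes its file. An illustrative sketch; the UID, image and paths are assumptions, not values from this run.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    nonRoot := int64(1001) // hypothetical non-root UID
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-non-root"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
            Volumes: []corev1.Volume{{
                Name: "scratch",
                VolumeSource: corev1.VolumeSource{
                    // Medium "" = default (node filesystem); "Memory" would back it with tmpfs.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                },
            }},
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "busybox:1.29",
                Command:      []string{"sh", "-c", "touch /scratch/file && chmod 0666 /scratch/file && ls -l /scratch"},
                VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
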
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:19:51.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 12:19:51.246: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Mar 17 12:19:51.420: INFO: Number of nodes with available pods: 0
Mar 17 12:19:51.420: INFO: Node kube is running more than one daemon pod
Mar 17 12:19:52.425: INFO: Number of nodes with available pods: 0
Mar 17 12:19:52.425: INFO: Node kube is running more than one daemon pod
Mar 17 12:19:53.431: INFO: Number of nodes with available pods: 0
Mar 17 12:19:53.431: INFO: Node kube is running more than one daemon pod
Mar 17 12:19:54.425: INFO: Number of nodes with available pods: 1
Mar 17 12:19:54.425: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Mar 17 12:19:54.456: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:19:55.506: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:19:56.506: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:19:57.506: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:19:58.506: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:19:59.509: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:00.508: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:01.507: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:02.507: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:03.506: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:04.507: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:05.508: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:06.507: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:07.506: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:08.506: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:09.508: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:10.506: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:11.507: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:12.507: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:13.507: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:14.508: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:15.506: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:16.510: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:17.507: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:18.507: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:19.507: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:20.506: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:21.506: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:22.506: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:23.508: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:24.511: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:25.507: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:26.508: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:27.507: INFO: Wrong image for pod: daemon-set-m9fp4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
Mar 17 12:20:27.507: INFO: Pod daemon-set-m9fp4 is not available
Mar 17 12:20:28.514: INFO: Pod daemon-set-v7nbx is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Mar 17 12:20:28.520: INFO: Number of nodes with available pods: 0
Mar 17 12:20:28.520: INFO: Node kube is running more than one daemon pod
Mar 17 12:20:29.527: INFO: Number of nodes with available pods: 0
Mar 17 12:20:29.527: INFO: Node kube is running more than one daemon pod
Mar 17 12:20:30.608: INFO: Number of nodes with available pods: 0
Mar 17 12:20:30.608: INFO: Node kube is running more than one daemon pod
Mar 17 12:20:31.532: INFO: Number of nodes with available pods: 1
Mar 17 12:20:31.532: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tbh9v, will wait for the garbage collector to delete the pods
Mar 17 12:20:31.600: INFO: Deleting DaemonSet.extensions daemon-set took: 4.735276ms
Mar 17 12:20:32.900: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.30021894s
Mar 17 12:20:36.053: INFO: Number of nodes with available pods: 0
Mar 17 12:20:36.053: INFO: Number of running nodes: 0, number of available pods: 0
Mar 17 12:20:36.056: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tbh9v/daemonsets","resourceVersion":"1297424"},"items":null}

Mar 17 12:20:36.058: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tbh9v/pods","resourceVersion":"1297424"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:20:36.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-tbh9v" for this suite.
Mar 17 12:20:42.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:20:42.151: INFO: namespace: e2e-tests-daemonsets-tbh9v, resource: bindings, ignored listing per whitelist
Mar 17 12:20:42.168: INFO: namespace e2e-tests-daemonsets-tbh9v deletion completed in 6.096270744s

• [SLOW TEST:51.157 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
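The DaemonSet case above creates a one-pod-per-node DaemonSet with the RollingUpdate strategy, patches the pod template image (serve-hostname:1.1 to redis:1.0 in the log), and waits until every pod reports the new image. A sketch of such a DaemonSet; labels and container name are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"app": "daemon-set"}
    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            // RollingUpdate: changing the pod template (e.g. a new image) replaces pods node by node.
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(ds, "", "  ")
    fmt.Println(string(out))
    // To trigger the rolling update the e2e test patches the template image to
    // gcr.io/kubernetes-e2e-test-images/redis:1.0 and waits for the replacement pod.
}
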
SSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:20:42.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 12:20:42.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:20:46.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pvlfx" for this suite.
Mar 17 12:21:26.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:21:26.532: INFO: namespace: e2e-tests-pods-pvlfx, resource: bindings, ignored listing per whitelist
Mar 17 12:21:26.550: INFO: namespace e2e-tests-pods-pvlfx deletion completed in 40.119551735s

• [SLOW TEST:44.382 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
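The log-retrieval case above reads container logs through the API server's logs subresource over a websocket. A sketch of the conventional client-go path to the same subresource (HTTP streaming rather than websockets), assuming a recent client-go where rest.Request.Stream takes a context; pod, container and namespace names are hypothetical.

package main

import (
    "context"
    "io"
    "log"
    "os"
    "path/filepath"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }
    // Stream logs from the pod's "main" container (names are hypothetical).
    req := clientset.CoreV1().Pods("default").GetLogs("pod-logs-websocket", &corev1.PodLogOptions{Container: "main"})
    stream, err := req.Stream(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    defer stream.Close()
    if _, err := io.Copy(os.Stdout, stream); err != nil {
        log.Fatal(err)
    }
}
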
SSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:21:26.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 12:21:26.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:21:30.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5jqhq" for this suite.
Mar 17 12:22:23.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:22:23.032: INFO: namespace: e2e-tests-pods-5jqhq, resource: bindings, ignored listing per whitelist
Mar 17 12:22:23.076: INFO: namespace e2e-tests-pods-5jqhq deletion completed in 52.103746507s

• [SLOW TEST:56.526 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
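The exec case above drives the pod's exec subresource over a websocket. A sketch of the commonly used client-go route to the same subresource via the remotecommand package, which negotiates SPDY by default (newer client-go releases also ship a websocket executor); pod, container and command are hypothetical.

package main

import (
    "log"
    "os"
    "path/filepath"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/tools/remotecommand"
    "k8s.io/client-go/util/homedir"
)

func main() {
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }
    // Build a request against the pod's "exec" subresource (pod/container names are hypothetical).
    req := clientset.CoreV1().RESTClient().Post().
        Resource("pods").
        Namespace("default").
        Name("pod-exec-websocket").
        SubResource("exec").
        VersionedParams(&corev1.PodExecOptions{
            Container: "main",
            Command:   []string{"echo", "remote execution"},
            Stdout:    true,
            Stderr:    true,
        }, scheme.ParameterCodec)

    // remotecommand streams stdout/stderr back over the negotiated protocol.
    exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
    if err != nil {
        log.Fatal(err)
    }
    if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
        log.Fatal(err)
    }
}
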
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:22:23.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-514aeb5e-48af-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume configMaps
Mar 17 12:22:23.195: INFO: Waiting up to 5m0s for pod "pod-configmaps-514b54d7-48af-11e9-bf64-0242ac110009" in namespace "e2e-tests-configmap-wbwlg" to be "success or failure"
Mar 17 12:22:23.206: INFO: Pod "pod-configmaps-514b54d7-48af-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 10.313703ms
Mar 17 12:22:25.250: INFO: Pod "pod-configmaps-514b54d7-48af-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055120675s
Mar 17 12:22:27.275: INFO: Pod "pod-configmaps-514b54d7-48af-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0800842s
STEP: Saw pod success
Mar 17 12:22:27.275: INFO: Pod "pod-configmaps-514b54d7-48af-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:22:27.299: INFO: Trying to get logs from node kube pod pod-configmaps-514b54d7-48af-11e9-bf64-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Mar 17 12:22:27.409: INFO: Waiting for pod pod-configmaps-514b54d7-48af-11e9-bf64-0242ac110009 to disappear
Mar 17 12:22:27.461: INFO: Pod pod-configmaps-514b54d7-48af-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:22:27.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wbwlg" for this suite.
Mar 17 12:22:33.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:22:33.536: INFO: namespace: e2e-tests-configmap-wbwlg, resource: bindings, ignored listing per whitelist
Mar 17 12:22:33.554: INFO: namespace e2e-tests-configmap-wbwlg deletion completed in 6.090627448s

• [SLOW TEST:10.477 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
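The ConfigMap-volume case above mounts a ConfigMap with defaultMode controlling the permission bits of the projected files. A sketch of such a pod; the ConfigMap name, mount path, image and the 0400 mode are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    mode := int32(0400) // defaultMode applied to every file created from the ConfigMap keys
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-defaultmode"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "config",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
                        DefaultMode:          &mode,
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "configmap-volume-test",
                Image:        "busybox:1.29",
                Command:      []string{"sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/*"},
                VolumeMounts: []corev1.VolumeMount{{Name: "config", MountPath: "/etc/configmap-volume"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
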
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:22:33.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:22:37.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-qf79f" for this suite.
Mar 17 12:23:23.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:23:23.767: INFO: namespace: e2e-tests-kubelet-test-qf79f, resource: bindings, ignored listing per whitelist
Mar 17 12:23:23.817: INFO: namespace e2e-tests-kubelet-test-qf79f deletion completed in 46.089470304s

• [SLOW TEST:50.263 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:23:23.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-76d43ade-48af-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume configMaps
Mar 17 12:23:26.186: INFO: Waiting up to 5m0s for pod "pod-configmaps-76d4c292-48af-11e9-bf64-0242ac110009" in namespace "e2e-tests-configmap-s2qqv" to be "success or failure"
Mar 17 12:23:26.209: INFO: Pod "pod-configmaps-76d4c292-48af-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 23.587056ms
Mar 17 12:23:28.213: INFO: Pod "pod-configmaps-76d4c292-48af-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027318146s
Mar 17 12:23:30.347: INFO: Pod "pod-configmaps-76d4c292-48af-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161401746s
Mar 17 12:23:32.350: INFO: Pod "pod-configmaps-76d4c292-48af-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.16382815s
STEP: Saw pod success
Mar 17 12:23:32.350: INFO: Pod "pod-configmaps-76d4c292-48af-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:23:32.357: INFO: Trying to get logs from node kube pod pod-configmaps-76d4c292-48af-11e9-bf64-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Mar 17 12:23:32.380: INFO: Waiting for pod pod-configmaps-76d4c292-48af-11e9-bf64-0242ac110009 to disappear
Mar 17 12:23:32.400: INFO: Pod pod-configmaps-76d4c292-48af-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:23:32.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-s2qqv" for this suite.
Mar 17 12:23:40.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:23:40.748: INFO: namespace: e2e-tests-configmap-s2qqv, resource: bindings, ignored listing per whitelist
Mar 17 12:23:40.758: INFO: namespace e2e-tests-configmap-s2qqv deletion completed in 8.355320509s

• [SLOW TEST:16.941 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:23:40.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-7fb0bd1d-48af-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume configMaps
Mar 17 12:23:41.053: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7fb361b2-48af-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-k42t9" to be "success or failure"
Mar 17 12:23:41.186: INFO: Pod "pod-projected-configmaps-7fb361b2-48af-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 132.711118ms
Mar 17 12:23:43.217: INFO: Pod "pod-projected-configmaps-7fb361b2-48af-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163975051s
Mar 17 12:23:45.804: INFO: Pod "pod-projected-configmaps-7fb361b2-48af-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.750768293s
Mar 17 12:23:47.807: INFO: Pod "pod-projected-configmaps-7fb361b2-48af-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.754259479s
STEP: Saw pod success
Mar 17 12:23:47.808: INFO: Pod "pod-projected-configmaps-7fb361b2-48af-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:23:47.812: INFO: Trying to get logs from node kube pod pod-projected-configmaps-7fb361b2-48af-11e9-bf64-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 17 12:23:48.017: INFO: Waiting for pod pod-projected-configmaps-7fb361b2-48af-11e9-bf64-0242ac110009 to disappear
Mar 17 12:23:48.034: INFO: Pod pod-projected-configmaps-7fb361b2-48af-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:23:48.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k42t9" for this suite.
Mar 17 12:23:54.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:23:54.070: INFO: namespace: e2e-tests-projected-k42t9, resource: bindings, ignored listing per whitelist
Mar 17 12:23:54.164: INFO: namespace e2e-tests-projected-k42t9 deletion completed in 6.12526421s

• [SLOW TEST:13.405 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:23:54.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:24:02.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-hbl2z" for this suite.
Mar 17 12:24:48.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:24:48.518: INFO: namespace: e2e-tests-kubelet-test-hbl2z, resource: bindings, ignored listing per whitelist
Mar 17 12:24:48.522: INFO: namespace e2e-tests-kubelet-test-hbl2z deletion completed in 46.104771628s

• [SLOW TEST:54.358 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
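The hostAliases case above asserts that entries from pod.spec.hostAliases show up in the container's /etc/hosts. A sketch; the IP and hostnames are placeholders.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            // Each entry is appended to /etc/hosts inside every container of the pod.
            HostAliases: []corev1.HostAlias{{
                IP:        "123.45.67.89",
                Hostnames: []string{"foo.local", "bar.local"},
            }},
            Containers: []corev1.Container{{
                Name:    "busybox-host-aliases",
                Image:   "busybox:1.29",
                Command: []string{"sh", "-c", "cat /etc/hosts"},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
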
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:24:48.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-89nr2
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 17 12:24:48.616: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 17 12:25:14.890: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-89nr2 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 12:25:14.890: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 12:25:15.039: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:25:15.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-89nr2" for this suite.
Mar 17 12:25:39.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:25:39.163: INFO: namespace: e2e-tests-pod-network-test-89nr2, resource: bindings, ignored listing per whitelist
Mar 17 12:25:39.189: INFO: namespace e2e-tests-pod-network-test-89nr2 deletion completed in 24.147001726s

• [SLOW TEST:50.668 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:25:39.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-fh78j
Mar 17 12:25:43.416: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-fh78j
STEP: checking the pod's current state and verifying that restartCount is present
Mar 17 12:25:43.418: INFO: Initial restart count of pod liveness-exec is 0
Mar 17 12:26:35.648: INFO: Restart count of pod e2e-tests-container-probe-fh78j/liveness-exec is now 1 (52.230452915s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:26:35.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-fh78j" for this suite.
Mar 17 12:26:41.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:26:41.855: INFO: namespace: e2e-tests-container-probe-fh78j, resource: bindings, ignored listing per whitelist
Mar 17 12:26:41.870: INFO: namespace e2e-tests-container-probe-fh78j deletion completed in 6.090780296s

• [SLOW TEST:62.680 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
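The liveness case above runs a container that removes /tmp/health partway through its life, so the exec probe ("cat /tmp/health") starts failing and the kubelet restarts it; restartCount went from 0 to 1 after roughly 52s in this run. A sketch of such a pod, assuming a k8s.io/api release from this era where the probe action is the embedded Handler field (renamed ProbeHandler in newer releases); image and timings are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "liveness",
                Image: "busybox:1.29",
                // Healthy for 30s, then the probe starts failing and the kubelet restarts the container.
                Command: []string{"sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
                LivenessProbe: &corev1.Probe{
                    Handler: corev1.Handler{
                        Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                    },
                    InitialDelaySeconds: 5,
                    PeriodSeconds:       5,
                    FailureThreshold:    1,
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
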
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:26:41.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 17 12:26:42.187: INFO: Waiting up to 5m0s for pod "pod-eba9eece-48af-11e9-bf64-0242ac110009" in namespace "e2e-tests-emptydir-wmzwd" to be "success or failure"
Mar 17 12:26:42.273: INFO: Pod "pod-eba9eece-48af-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 86.359299ms
Mar 17 12:26:44.276: INFO: Pod "pod-eba9eece-48af-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089225101s
Mar 17 12:26:46.280: INFO: Pod "pod-eba9eece-48af-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093180529s
Mar 17 12:26:48.283: INFO: Pod "pod-eba9eece-48af-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09558009s
STEP: Saw pod success
Mar 17 12:26:48.283: INFO: Pod "pod-eba9eece-48af-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:26:48.284: INFO: Trying to get logs from node kube pod pod-eba9eece-48af-11e9-bf64-0242ac110009 container test-container: 
STEP: delete the pod
Mar 17 12:26:48.421: INFO: Waiting for pod pod-eba9eece-48af-11e9-bf64-0242ac110009 to disappear
Mar 17 12:26:48.432: INFO: Pod pod-eba9eece-48af-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:26:48.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wmzwd" for this suite.
Mar 17 12:26:54.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:26:54.491: INFO: namespace: e2e-tests-emptydir-wmzwd, resource: bindings, ignored listing per whitelist
Mar 17 12:26:54.548: INFO: namespace e2e-tests-emptydir-wmzwd deletion completed in 6.113814617s

• [SLOW TEST:12.678 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:26:54.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-x5jx6
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-x5jx6
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-x5jx6
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-x5jx6
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-x5jx6
Mar 17 12:27:02.796: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-x5jx6, name: ss-0, uid: f60a0581-48af-11e9-a072-fa163e921bae, status phase: Pending. Waiting for statefulset controller to delete.
Mar 17 12:27:07.896: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-x5jx6, name: ss-0, uid: f60a0581-48af-11e9-a072-fa163e921bae, status phase: Failed. Waiting for statefulset controller to delete.
Mar 17 12:27:07.913: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-x5jx6, name: ss-0, uid: f60a0581-48af-11e9-a072-fa163e921bae, status phase: Failed. Waiting for statefulset controller to delete.
Mar 17 12:27:07.932: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-x5jx6
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-x5jx6
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-x5jx6 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Mar 17 12:27:18.546: INFO: Deleting all statefulset in ns e2e-tests-statefulset-x5jx6
Mar 17 12:27:18.548: INFO: Scaling statefulset ss to 0
Mar 17 12:27:28.706: INFO: Waiting for statefulset status.replicas updated to 0
Mar 17 12:27:28.709: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:27:28.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-x5jx6" for this suite.
Mar 17 12:27:34.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:27:34.893: INFO: namespace: e2e-tests-statefulset-x5jx6, resource: bindings, ignored listing per whitelist
Mar 17 12:27:34.909: INFO: namespace e2e-tests-statefulset-x5jx6 deletion completed in 6.169732248s

• [SLOW TEST:40.362 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
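The StatefulSet case above pins a plain pod to the node with a host port, then creates a one-replica StatefulSet whose pod needs the same port; the controller must keep deleting the Failed ss-0 and recreating it until the blocking pod is removed. A sketch of the conflicting pair; the port is arbitrary, while the node name and nginx image match values seen elsewhere in this log.

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"app": "ss"}
    port := corev1.ContainerPort{ContainerPort: 21017, HostPort: 21017} // hypothetical host port
    one := int32(1)

    // Plain pod pinned to the node and holding the host port.
    blocker := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
        Spec: corev1.PodSpec{
            NodeName: "kube", // the single node in this deployment
            Containers: []corev1.Container{{
                Name: "nginx", Image: "nginx:1.14-alpine", Ports: []corev1.ContainerPort{port},
            }},
        },
    }

    // One-replica StatefulSet whose pod wants the same host port on the same node:
    // ss-0 keeps failing until the blocking pod is removed, and the controller must
    // delete the Failed pod and create a fresh ss-0 each time.
    ss := &appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{Name: "ss"},
        Spec: appsv1.StatefulSetSpec{
            Replicas:    &one,
            ServiceName: "test",
            Selector:    &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    NodeName: "kube",
                    Containers: []corev1.Container{{
                        Name: "nginx", Image: "nginx:1.14-alpine", Ports: []corev1.ContainerPort{port},
                    }},
                },
            },
        },
    }

    for _, obj := range []interface{}{blocker, ss} {
        out, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(out))
    }
}
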
S
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:27:34.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Mar 17 12:27:41.253: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-0b3c9a87-48b0-11e9-bf64-0242ac110009", GenerateName:"", Namespace:"e2e-tests-pods-h8nvq", SelfLink:"/api/v1/namespaces/e2e-tests-pods-h8nvq/pods/pod-submit-remove-0b3c9a87-48b0-11e9-bf64-0242ac110009", UID:"0b3eeaa2-48b0-11e9-a072-fa163e921bae", ResourceVersion:"1298380", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688422455, loc:(*time.Location)(0x7b13a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"147589120"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-th9mj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0020b4a80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-th9mj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ea25a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kube", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0027123c0), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ea2670)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ea2690)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001ea2698), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001ea269c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688422455, loc:(*time.Location)(0x7b13a80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688422459, loc:(*time.Location)(0x7b13a80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688422459, loc:(*time.Location)(0x7b13a80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688422455, loc:(*time.Location)(0x7b13a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"192.168.100.7", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001f02460), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001f02480), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:b67e90a1d8088f0e205c77c793c271524773a6de163fb3855b1c1bedf979da7d", ContainerID:"docker://749ef9666ba693b88801899c298d3d0f9656f31014ae64752c84be81c35f4c3f"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Mar 17 12:27:46.325: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:27:46.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-h8nvq" for this suite.
Mar 17 12:27:52.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:27:52.365: INFO: namespace: e2e-tests-pods-h8nvq, resource: bindings, ignored listing per whitelist
Mar 17 12:27:52.419: INFO: namespace e2e-tests-pods-h8nvq deletion completed in 6.089457306s

• [SLOW TEST:17.509 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
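The lifecycle exercised above (submit a pod, delete it gracefully, confirm the kubelet observed the termination) can be reproduced by hand. A minimal sketch follows; the pod name, file name, and grace period are illustrative rather than taken from this run:

  # pod-demo.yaml -- a throwaway pod using the same image family as this suite
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-submit-remove-demo
  spec:
    containers:
    - name: nginx
      image: nginx:1.14-alpine

  # Submit, then delete gracefully and confirm the API server no longer reports it.
  kubectl --kubeconfig=/root/.kube/config apply -f pod-demo.yaml
  kubectl delete pod pod-submit-remove-demo --grace-period=30
  kubectl get pod pod-submit-remove-demo --ignore-not-found   # prints nothing once deletion completes
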
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:27:52.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 12:27:52.611: INFO: Waiting up to 5m0s for pod "downwardapi-volume-15a38939-48b0-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-pl2w8" to be "success or failure"
Mar 17 12:27:52.648: INFO: Pod "downwardapi-volume-15a38939-48b0-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 37.187957ms
Mar 17 12:27:54.652: INFO: Pod "downwardapi-volume-15a38939-48b0-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040834836s
Mar 17 12:27:56.655: INFO: Pod "downwardapi-volume-15a38939-48b0-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04363439s
STEP: Saw pod success
Mar 17 12:27:56.655: INFO: Pod "downwardapi-volume-15a38939-48b0-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:27:56.656: INFO: Trying to get logs from node kube pod downwardapi-volume-15a38939-48b0-11e9-bf64-0242ac110009 container client-container: 
STEP: delete the pod
Mar 17 12:27:56.768: INFO: Waiting for pod downwardapi-volume-15a38939-48b0-11e9-bf64-0242ac110009 to disappear
Mar 17 12:27:56.783: INFO: Pod downwardapi-volume-15a38939-48b0-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:27:56.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pl2w8" for this suite.
Mar 17 12:28:02.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:28:02.905: INFO: namespace: e2e-tests-projected-pl2w8, resource: bindings, ignored listing per whitelist
Mar 17 12:28:02.915: INFO: namespace e2e-tests-projected-pl2w8 deletion completed in 6.128168312s

• [SLOW TEST:10.496 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
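The projection verified above exposes the container's CPU request as a file inside a projected volume. A minimal manifest sketch of that setup (pod name, image, request value, and paths are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request; sleep 3600"]
      resources:
        requests:
          cpu: 250m                      # the value the mounted file should reflect
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.cpu
                divisor: 1m              # report the request in millicores
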
S
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:28:02.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-1bdeb265-48b0-11e9-bf64-0242ac110009
STEP: Creating secret with name s-test-opt-upd-1bdeb2a5-48b0-11e9-bf64-0242ac110009
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-1bdeb265-48b0-11e9-bf64-0242ac110009
STEP: Updating secret s-test-opt-upd-1bdeb2a5-48b0-11e9-bf64-0242ac110009
STEP: Creating secret with name s-test-opt-create-1bdeb2bd-48b0-11e9-bf64-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:29:19.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8v2dq" for this suite.
Mar 17 12:29:45.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:29:45.820: INFO: namespace: e2e-tests-projected-8v2dq, resource: bindings, ignored listing per whitelist
Mar 17 12:29:45.855: INFO: namespace e2e-tests-projected-8v2dq deletion completed in 26.079681575s

• [SLOW TEST:102.940 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
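The "optional updates" behaviour above relies on projected secret sources marked optional, so the pod keeps running and its volume content converges while secrets are deleted, updated, or created after pod start. A manifest sketch (secret names and mount path are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: creds
        mountPath: /etc/projected-secrets
    volumes:
    - name: creds
      projected:
        sources:
        - secret:
            name: s-test-opt-del         # may be deleted while the pod runs
            optional: true
        - secret:
            name: s-test-opt-create      # may not exist yet when the pod starts
            optional: true
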
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:29:45.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-5931cd6c-48b0-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume secrets
Mar 17 12:29:45.953: INFO: Waiting up to 5m0s for pod "pod-secrets-5932442b-48b0-11e9-bf64-0242ac110009" in namespace "e2e-tests-secrets-h552l" to be "success or failure"
Mar 17 12:29:46.133: INFO: Pod "pod-secrets-5932442b-48b0-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 180.076365ms
Mar 17 12:29:48.135: INFO: Pod "pod-secrets-5932442b-48b0-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18267471s
Mar 17 12:29:50.211: INFO: Pod "pod-secrets-5932442b-48b0-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.258040171s
STEP: Saw pod success
Mar 17 12:29:50.211: INFO: Pod "pod-secrets-5932442b-48b0-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:29:50.215: INFO: Trying to get logs from node kube pod pod-secrets-5932442b-48b0-11e9-bf64-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Mar 17 12:29:50.515: INFO: Waiting for pod pod-secrets-5932442b-48b0-11e9-bf64-0242ac110009 to disappear
Mar 17 12:29:50.553: INFO: Pod pod-secrets-5932442b-48b0-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:29:50.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-h552l" for this suite.
Mar 17 12:29:56.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:29:56.620: INFO: namespace: e2e-tests-secrets-h552l, resource: bindings, ignored listing per whitelist
Mar 17 12:29:56.744: INFO: namespace e2e-tests-secrets-h552l deletion completed in 6.188453474s

• [SLOW TEST:10.889 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
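The test above mounts a secret with an explicit key-to-path mapping and a per-item file mode. In manifest form that looks roughly like this (secret name, key, path, and mode are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo
  spec:
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map      # hypothetical secret holding key data-1
        items:
        - key: data-1
          path: new-path-data-1          # mapping: key mounted under a custom relative path
          mode: 0400                     # per-item file mode, overriding defaultMode
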
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:29:56.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-zf2w4
Mar 17 12:30:00.976: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-zf2w4
STEP: checking the pod's current state and verifying that restartCount is present
Mar 17 12:30:00.978: INFO: Initial restart count of pod liveness-http is 0
Mar 17 12:30:21.173: INFO: Restart count of pod e2e-tests-container-probe-zf2w4/liveness-http is now 1 (20.19486895s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:30:21.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-zf2w4" for this suite.
Mar 17 12:30:29.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:30:29.513: INFO: namespace: e2e-tests-container-probe-zf2w4, resource: bindings, ignored listing per whitelist
Mar 17 12:30:29.528: INFO: namespace e2e-tests-container-probe-zf2w4 deletion completed in 8.32741274s

• [SLOW TEST:32.784 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
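The restart observed above (count going from 0 to 1 after ~20s) is the expected effect of an HTTP liveness probe against /healthz starting to fail. A sketch of such a pod, loosely modelled on the upstream documentation example rather than on this suite's exact spec (image, port, and thresholds are assumptions):

  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-http
  spec:
    containers:
    - name: liveness
      image: k8s.gcr.io/liveness         # docs test server whose /healthz starts returning 500
      args: ["/server"]
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 3
        periodSeconds: 3
        failureThreshold: 1              # restart on the first failed probe
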
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:30:29.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-v2qwq
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-v2qwq
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-v2qwq
Mar 17 12:30:29.788: INFO: Found 0 stateful pods, waiting for 1
Mar 17 12:30:39.800: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Mar 17 12:30:39.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v2qwq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Mar 17 12:30:40.164: INFO: stderr: ""
Mar 17 12:30:40.164: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 17 12:30:40.164: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Mar 17 12:30:40.167: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Mar 17 12:30:50.172: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 17 12:30:50.172: INFO: Waiting for statefulset status.replicas updated to 0
Mar 17 12:30:50.548: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999263s
Mar 17 12:30:51.769: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.633205546s
Mar 17 12:30:52.887: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.412255693s
Mar 17 12:30:53.891: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.29374696s
Mar 17 12:30:54.895: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.289787248s
Mar 17 12:30:55.898: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.286293923s
Mar 17 12:30:57.015: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.282720893s
Mar 17 12:30:58.019: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.165493241s
Mar 17 12:30:59.021: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.162448406s
Mar 17 12:31:00.104: INFO: Verifying statefulset ss doesn't scale past 1 for another 159.582282ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-v2qwq
Mar 17 12:31:01.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v2qwq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 12:31:01.351: INFO: stderr: ""
Mar 17 12:31:01.351: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 17 12:31:01.351: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Mar 17 12:31:01.355: INFO: Found 1 stateful pods, waiting for 3
Mar 17 12:31:11.361: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 17 12:31:11.361: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 17 12:31:11.361: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Mar 17 12:31:21.358: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 17 12:31:21.358: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 17 12:31:21.358: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Mar 17 12:31:21.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v2qwq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Mar 17 12:31:21.650: INFO: stderr: ""
Mar 17 12:31:21.650: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 17 12:31:21.650: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Mar 17 12:31:21.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v2qwq ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Mar 17 12:31:21.965: INFO: stderr: ""
Mar 17 12:31:21.965: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 17 12:31:21.965: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Mar 17 12:31:21.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v2qwq ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Mar 17 12:31:22.284: INFO: stderr: ""
Mar 17 12:31:22.284: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 17 12:31:22.284: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Mar 17 12:31:22.284: INFO: Waiting for statefulset status.replicas updated to 0
Mar 17 12:31:22.287: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Mar 17 12:31:32.315: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 17 12:31:32.315: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Mar 17 12:31:32.315: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Mar 17 12:31:32.407: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999857s
Mar 17 12:31:33.410: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.91320199s
Mar 17 12:31:34.414: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.910043614s
Mar 17 12:31:37.592: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.905485702s
Mar 17 12:31:38.780: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.727854168s
Mar 17 12:31:39.790: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.540094522s
Mar 17 12:31:40.793: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.529866202s
Mar 17 12:31:41.899: INFO: Verifying statefulset ss doesn't scale past 3 for another 526.895521ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-v2qwq
Mar 17 12:31:42.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v2qwq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 12:31:43.258: INFO: stderr: ""
Mar 17 12:31:43.258: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 17 12:31:43.258: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Mar 17 12:31:43.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v2qwq ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 12:31:43.492: INFO: stderr: ""
Mar 17 12:31:43.492: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 17 12:31:43.492: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Mar 17 12:31:43.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v2qwq ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 17 12:31:43.927: INFO: stderr: ""
Mar 17 12:31:43.927: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 17 12:31:43.927: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Mar 17 12:31:43.927: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Mar 17 12:32:14.203: INFO: Deleting all statefulset in ns e2e-tests-statefulset-v2qwq
Mar 17 12:32:14.206: INFO: Scaling statefulset ss to 0
Mar 17 12:32:14.213: INFO: Waiting for statefulset status.replicas updated to 0
Mar 17 12:32:14.215: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:32:14.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-v2qwq" for this suite.
Mar 17 12:32:22.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:32:22.553: INFO: namespace: e2e-tests-statefulset-v2qwq, resource: bindings, ignored listing per whitelist
Mar 17 12:32:22.584: INFO: namespace e2e-tests-statefulset-v2qwq deletion completed in 8.342724645s

• [SLOW TEST:113.055 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
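The ordering guarantees checked above come from the StatefulSet's default OrderedReady pod management: scale-up creates ss-0, ss-1, ss-2 one at a time and halts while any pod is unready (which the test forces by moving index.html away from the readiness path), and scale-down removes them in reverse. A trimmed sketch of the moving parts, with the readiness probe and namespace placeholder added here for illustration:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss
  spec:
    serviceName: test                    # the headless service created by the test
    podManagementPolicy: OrderedReady    # the default: ordered, one-at-a-time scaling
    replicas: 1
    selector:
      matchLabels:
        foo: bar
    template:
      metadata:
        labels:
          foo: bar
          baz: blah
      spec:
        containers:
        - name: nginx
          image: nginx:1.14-alpine
          readinessProbe:
            httpGet:
              path: /index.html          # removing this file makes the pod unready
              port: 80

  # With OrderedReady, the controller waits for readiness between each step.
  kubectl scale statefulset ss --replicas=3 --namespace=<test-namespace>
  kubectl scale statefulset ss --replicas=0 --namespace=<test-namespace>
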
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:32:22.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 17 12:32:22.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-qj8m7'
Mar 17 12:32:25.774: INFO: stderr: ""
Mar 17 12:32:25.774: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Mar 17 12:32:25.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-qj8m7'
Mar 17 12:32:37.938: INFO: stderr: ""
Mar 17 12:32:37.938: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:32:37.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qj8m7" for this suite.
Mar 17 12:32:43.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:32:43.985: INFO: namespace: e2e-tests-kubectl-qj8m7, resource: bindings, ignored listing per whitelist
Mar 17 12:32:44.203: INFO: namespace e2e-tests-kubectl-qj8m7 deletion completed in 6.257122489s

• [SLOW TEST:21.620 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:32:44.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 12:32:44.450: INFO: Creating ReplicaSet my-hostname-basic-c3988a77-48b0-11e9-bf64-0242ac110009
Mar 17 12:32:44.534: INFO: Pod name my-hostname-basic-c3988a77-48b0-11e9-bf64-0242ac110009: Found 0 pods out of 1
Mar 17 12:32:49.541: INFO: Pod name my-hostname-basic-c3988a77-48b0-11e9-bf64-0242ac110009: Found 1 pods out of 1
Mar 17 12:32:49.541: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c3988a77-48b0-11e9-bf64-0242ac110009" is running
Mar 17 12:32:51.550: INFO: Pod "my-hostname-basic-c3988a77-48b0-11e9-bf64-0242ac110009-l29mq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-03-17 12:32:44 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-03-17 12:32:44 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c3988a77-48b0-11e9-bf64-0242ac110009]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-03-17 12:32:44 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c3988a77-48b0-11e9-bf64-0242ac110009]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-03-17 12:32:44 +0000 UTC Reason: Message:}])
Mar 17 12:32:51.550: INFO: Trying to dial the pod
Mar 17 12:32:56.570: INFO: Controller my-hostname-basic-c3988a77-48b0-11e9-bf64-0242ac110009: Got expected result from replica 1 [my-hostname-basic-c3988a77-48b0-11e9-bf64-0242ac110009-l29mq]: "my-hostname-basic-c3988a77-48b0-11e9-bf64-0242ac110009-l29mq", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:32:56.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-zfnp6" for this suite.
Mar 17 12:33:02.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:33:02.702: INFO: namespace: e2e-tests-replicaset-zfnp6, resource: bindings, ignored listing per whitelist
Mar 17 12:33:02.722: INFO: namespace e2e-tests-replicaset-zfnp6 deletion completed in 6.136061723s

• [SLOW TEST:18.519 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
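The ReplicaSet check above creates one replica of a public image and then dials each pod to confirm it responds (the conformance test uses an image that echoes the pod's own hostname). A trimmed manifest sketch with an ordinary public image standing in:

  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: my-hostname-basic-demo         # illustrative name
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: my-hostname-basic-demo
    template:
      metadata:
        labels:
          name: my-hostname-basic-demo
      spec:
        containers:
        - name: my-hostname-basic-demo
          image: nginx:1.14-alpine       # any public image answering HTTP shows the same shape
          ports:
          - containerPort: 80
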
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:33:02.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 17 12:33:03.024: INFO: Waiting up to 5m0s for pod "pod-cea988bf-48b0-11e9-bf64-0242ac110009" in namespace "e2e-tests-emptydir-6dmb6" to be "success or failure"
Mar 17 12:33:03.203: INFO: Pod "pod-cea988bf-48b0-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 178.394688ms
Mar 17 12:33:05.207: INFO: Pod "pod-cea988bf-48b0-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182705913s
Mar 17 12:33:07.213: INFO: Pod "pod-cea988bf-48b0-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188611534s
Mar 17 12:33:09.351: INFO: Pod "pod-cea988bf-48b0-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.326525716s
STEP: Saw pod success
Mar 17 12:33:09.351: INFO: Pod "pod-cea988bf-48b0-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:33:09.356: INFO: Trying to get logs from node kube pod pod-cea988bf-48b0-11e9-bf64-0242ac110009 container test-container: 
STEP: delete the pod
Mar 17 12:33:09.914: INFO: Waiting for pod pod-cea988bf-48b0-11e9-bf64-0242ac110009 to disappear
Mar 17 12:33:10.287: INFO: Pod pod-cea988bf-48b0-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:33:10.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6dmb6" for this suite.
Mar 17 12:33:18.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:33:18.674: INFO: namespace: e2e-tests-emptydir-6dmb6, resource: bindings, ignored listing per whitelist
Mar 17 12:33:18.842: INFO: namespace e2e-tests-emptydir-6dmb6 deletion completed in 8.459142297s

• [SLOW TEST:16.120 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
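The (non-root,0644,default) case above means: a non-root user writing a 0644 file onto an emptyDir backed by the default medium (node disk rather than tmpfs). A minimal sketch of that combination; the UID, image, and paths are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    securityContext:
      runAsUser: 1001                    # run the container as a non-root UID
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "umask 0022 && echo content > /test-volume/file && ls -l /test-volume/file"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                       # default medium; 'medium: Memory' would use tmpfs instead
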
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:33:18.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-d841f9d3-48b0-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume configMaps
Mar 17 12:33:19.338: INFO: Waiting up to 5m0s for pod "pod-configmaps-d8435a7b-48b0-11e9-bf64-0242ac110009" in namespace "e2e-tests-configmap-mgkzk" to be "success or failure"
Mar 17 12:33:19.351: INFO: Pod "pod-configmaps-d8435a7b-48b0-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 13.802997ms
Mar 17 12:33:21.354: INFO: Pod "pod-configmaps-d8435a7b-48b0-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016426301s
Mar 17 12:33:23.603: INFO: Pod "pod-configmaps-d8435a7b-48b0-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265027033s
Mar 17 12:33:25.651: INFO: Pod "pod-configmaps-d8435a7b-48b0-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.312984454s
Mar 17 12:33:27.902: INFO: Pod "pod-configmaps-d8435a7b-48b0-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.564590628s
STEP: Saw pod success
Mar 17 12:33:27.902: INFO: Pod "pod-configmaps-d8435a7b-48b0-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:33:27.905: INFO: Trying to get logs from node kube pod pod-configmaps-d8435a7b-48b0-11e9-bf64-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Mar 17 12:33:28.274: INFO: Waiting for pod pod-configmaps-d8435a7b-48b0-11e9-bf64-0242ac110009 to disappear
Mar 17 12:33:28.303: INFO: Pod pod-configmaps-d8435a7b-48b0-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:33:28.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mgkzk" for this suite.
Mar 17 12:33:34.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:33:34.381: INFO: namespace: e2e-tests-configmap-mgkzk, resource: bindings, ignored listing per whitelist
Mar 17 12:33:34.428: INFO: namespace e2e-tests-configmap-mgkzk deletion completed in 6.121187465s

• [SLOW TEST:15.586 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
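Same pattern as the secret-volume case earlier, but for a ConfigMap consumed by a non-root container with a key-to-path mapping. A manifest sketch (ConfigMap name, key, UID, and paths are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-demo
  spec:
    securityContext:
      runAsUser: 1000                    # consume the volume as a non-root user
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-map  # hypothetical ConfigMap containing key data-2
        items:
        - key: data-2
          path: path/to/data-2           # key mapped under a custom relative path
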
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:33:34.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 12:33:34.599: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Mar 17 12:33:34.615: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-jz4hc/daemonsets","resourceVersion":"1299241"},"items":null}

Mar 17 12:33:34.628: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-jz4hc/pods","resourceVersion":"1299241"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:33:34.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-jz4hc" for this suite.
Mar 17 12:33:40.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:33:40.687: INFO: namespace: e2e-tests-daemonsets-jz4hc, resource: bindings, ignored listing per whitelist
Mar 17 12:33:40.729: INFO: namespace e2e-tests-daemonsets-jz4hc deletion completed in 6.093887205s

S [SKIPPING] [6.301 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Mar 17 12:33:34.599: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:33:40.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-e5566570-48b0-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume configMaps
Mar 17 12:33:41.102: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e55ad715-48b0-11e9-bf64-0242ac110009" in namespace "e2e-tests-projected-7dprj" to be "success or failure"
Mar 17 12:33:41.142: INFO: Pod "pod-projected-configmaps-e55ad715-48b0-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 39.820897ms
Mar 17 12:33:43.145: INFO: Pod "pod-projected-configmaps-e55ad715-48b0-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042338542s
Mar 17 12:33:45.147: INFO: Pod "pod-projected-configmaps-e55ad715-48b0-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044976242s
Mar 17 12:33:47.150: INFO: Pod "pod-projected-configmaps-e55ad715-48b0-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047728182s
STEP: Saw pod success
Mar 17 12:33:47.150: INFO: Pod "pod-projected-configmaps-e55ad715-48b0-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:33:47.152: INFO: Trying to get logs from node kube pod pod-projected-configmaps-e55ad715-48b0-11e9-bf64-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 17 12:33:47.308: INFO: Waiting for pod pod-projected-configmaps-e55ad715-48b0-11e9-bf64-0242ac110009 to disappear
Mar 17 12:33:47.323: INFO: Pod pod-projected-configmaps-e55ad715-48b0-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:33:47.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7dprj" for this suite.
Mar 17 12:33:55.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:33:55.659: INFO: namespace: e2e-tests-projected-7dprj, resource: bindings, ignored listing per whitelist
Mar 17 12:33:55.779: INFO: namespace e2e-tests-projected-7dprj deletion completed in 8.453158937s

• [SLOW TEST:15.049 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:33:55.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Mar 17 12:33:55.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fjdw9'
Mar 17 12:33:56.491: INFO: stderr: ""
Mar 17 12:33:56.491: INFO: stdout: "pod/pause created\n"
Mar 17 12:33:56.491: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Mar 17 12:33:56.491: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-fjdw9" to be "running and ready"
Mar 17 12:33:56.523: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 32.293994ms
Mar 17 12:33:58.527: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035925284s
Mar 17 12:34:00.530: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038787345s
Mar 17 12:34:02.533: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.042330346s
Mar 17 12:34:02.533: INFO: Pod "pause" satisfied condition "running and ready"
Mar 17 12:34:02.533: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Mar 17 12:34:02.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-fjdw9'
Mar 17 12:34:02.602: INFO: stderr: ""
Mar 17 12:34:02.602: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Mar 17 12:34:02.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-fjdw9'
Mar 17 12:34:02.721: INFO: stderr: ""
Mar 17 12:34:02.721: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          6s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Mar 17 12:34:02.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-fjdw9'
Mar 17 12:34:02.801: INFO: stderr: ""
Mar 17 12:34:02.801: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Mar 17 12:34:02.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-fjdw9'
Mar 17 12:34:03.030: INFO: stderr: ""
Mar 17 12:34:03.030: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          7s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Mar 17 12:34:03.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fjdw9'
Mar 17 12:34:03.276: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 17 12:34:03.276: INFO: stdout: "pod \"pause\" force deleted\n"
Mar 17 12:34:03.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-fjdw9'
Mar 17 12:34:03.647: INFO: stderr: "No resources found.\n"
Mar 17 12:34:03.647: INFO: stdout: ""
Mar 17 12:34:03.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-fjdw9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 17 12:34:03.717: INFO: stderr: ""
Mar 17 12:34:03.717: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:34:03.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fjdw9" for this suite.
Mar 17 12:34:09.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:34:09.940: INFO: namespace: e2e-tests-kubectl-fjdw9, resource: bindings, ignored listing per whitelist
Mar 17 12:34:09.966: INFO: namespace e2e-tests-kubectl-fjdw9 deletion completed in 6.23927862s

• [SLOW TEST:14.186 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
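The label add/verify/remove cycle above maps directly onto three kubectl invocations; a sketch against a placeholder namespace:

  # Add the label, show it as a column, then remove it with the trailing '-'.
  kubectl label pods pause testing-label=testing-label-value --namespace=<test-namespace>
  kubectl get pod pause -L testing-label --namespace=<test-namespace>
  kubectl label pods pause testing-label- --namespace=<test-namespace>
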
SS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:34:09.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-f6ae7cd8-48b0-11e9-bf64-0242ac110009
STEP: Creating configMap with name cm-test-opt-upd-f6ae7d11-48b0-11e9-bf64-0242ac110009
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f6ae7cd8-48b0-11e9-bf64-0242ac110009
STEP: Updating configmap cm-test-opt-upd-f6ae7d11-48b0-11e9-bf64-0242ac110009
STEP: Creating configMap with name cm-test-opt-create-f6ae7d22-48b0-11e9-bf64-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:34:24.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7rg4z" for this suite.
Mar 17 12:34:48.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:34:48.500: INFO: namespace: e2e-tests-projected-7rg4z, resource: bindings, ignored listing per whitelist
Mar 17 12:34:48.532: INFO: namespace e2e-tests-projected-7rg4z deletion completed in 24.084109866s

• [SLOW TEST:38.567 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:34:48.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 12:34:48.759: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0dae5be5-48b1-11e9-bf64-0242ac110009" in namespace "e2e-tests-downward-api-prfgv" to be "success or failure"
Mar 17 12:34:48.767: INFO: Pod "downwardapi-volume-0dae5be5-48b1-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 8.352405ms
Mar 17 12:34:50.832: INFO: Pod "downwardapi-volume-0dae5be5-48b1-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07335465s
Mar 17 12:34:52.836: INFO: Pod "downwardapi-volume-0dae5be5-48b1-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076581775s
STEP: Saw pod success
Mar 17 12:34:52.836: INFO: Pod "downwardapi-volume-0dae5be5-48b1-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:34:52.838: INFO: Trying to get logs from node kube pod downwardapi-volume-0dae5be5-48b1-11e9-bf64-0242ac110009 container client-container: 
STEP: delete the pod
Mar 17 12:34:52.881: INFO: Waiting for pod downwardapi-volume-0dae5be5-48b1-11e9-bf64-0242ac110009 to disappear
Mar 17 12:34:52.981: INFO: Pod downwardapi-volume-0dae5be5-48b1-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:34:52.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-prfgv" for this suite.
Mar 17 12:34:59.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:34:59.036: INFO: namespace: e2e-tests-downward-api-prfgv, resource: bindings, ignored listing per whitelist
Mar 17 12:34:59.078: INFO: namespace e2e-tests-downward-api-prfgv deletion completed in 6.093242355s

• [SLOW TEST:10.545 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:34:59.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 17 12:34:59.344: INFO: Number of nodes with available pods: 0
Mar 17 12:34:59.344: INFO: Node kube is running more than one daemon pod
Mar 17 12:35:00.352: INFO: Number of nodes with available pods: 0
Mar 17 12:35:00.352: INFO: Node kube is running more than one daemon pod
Mar 17 12:35:01.351: INFO: Number of nodes with available pods: 0
Mar 17 12:35:01.351: INFO: Node kube is running more than one daemon pod
Mar 17 12:35:02.417: INFO: Number of nodes with available pods: 1
Mar 17 12:35:02.417: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Mar 17 12:35:02.509: INFO: Number of nodes with available pods: 0
Mar 17 12:35:02.509: INFO: Node kube is running more than one daemon pod
Mar 17 12:35:03.637: INFO: Number of nodes with available pods: 0
Mar 17 12:35:03.637: INFO: Node kube is running more than one daemon pod
Mar 17 12:35:06.074: INFO: Number of nodes with available pods: 0
Mar 17 12:35:06.074: INFO: Node kube is running more than one daemon pod
Mar 17 12:35:06.755: INFO: Number of nodes with available pods: 0
Mar 17 12:35:06.755: INFO: Node kube is running more than one daemon pod
Mar 17 12:35:07.637: INFO: Number of nodes with available pods: 0
Mar 17 12:35:07.637: INFO: Node kube is running more than one daemon pod
Mar 17 12:35:08.580: INFO: Number of nodes with available pods: 0
Mar 17 12:35:08.580: INFO: Node kube is running more than one daemon pod
Mar 17 12:35:09.517: INFO: Number of nodes with available pods: 0
Mar 17 12:35:09.517: INFO: Node kube is running more than one daemon pod
Mar 17 12:35:10.515: INFO: Number of nodes with available pods: 1
Mar 17 12:35:10.515: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-84nkn, will wait for the garbage collector to delete the pods
Mar 17 12:35:10.586: INFO: Deleting DaemonSet.extensions daemon-set took: 12.322196ms
Mar 17 12:35:10.686: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.203778ms
Mar 17 12:35:48.121: INFO: Number of nodes with available pods: 0
Mar 17 12:35:48.121: INFO: Number of running nodes: 0, number of available pods: 0
Mar 17 12:35:48.125: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-84nkn/daemonsets","resourceVersion":"1299566"},"items":null}

Mar 17 12:35:48.128: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-84nkn/pods","resourceVersion":"1299566"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:35:48.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-84nkn" for this suite.
Mar 17 12:35:54.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:35:54.178: INFO: namespace: e2e-tests-daemonsets-84nkn, resource: bindings, ignored listing per whitelist
Mar 17 12:35:54.243: INFO: namespace e2e-tests-daemonsets-84nkn deletion completed in 6.105686947s

• [SLOW TEST:55.165 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
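For reference, a DaemonSet close in spirit to the simple "daemon-set" this spec creates can be written by hand as below; the image and labels are illustrative, not taken from the run. The spec itself flips one daemon pod's phase to 'Failed' through the API, which is awkward to do manually, but deleting a daemon pod and watching the controller recreate it exercises the same retry behaviour:

kubectl create namespace ds-demo
cat <<'EOF' | kubectl apply -n ds-demo -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # illustrative image
EOF
kubectl -n ds-demo delete pod -l app=daemon-set   # controller should recreate the pod on each node
kubectl -n ds-demo get pods -o wide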
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:35:54.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-34dbeee5-48b1-11e9-bf64-0242ac110009
STEP: Creating a pod to test consume secrets
Mar 17 12:35:54.647: INFO: Waiting up to 5m0s for pod "pod-secrets-34e41d1e-48b1-11e9-bf64-0242ac110009" in namespace "e2e-tests-secrets-7lnp5" to be "success or failure"
Mar 17 12:35:54.663: INFO: Pod "pod-secrets-34e41d1e-48b1-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.573704ms
Mar 17 12:35:56.666: INFO: Pod "pod-secrets-34e41d1e-48b1-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019636766s
Mar 17 12:35:58.839: INFO: Pod "pod-secrets-34e41d1e-48b1-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192437451s
Mar 17 12:36:00.949: INFO: Pod "pod-secrets-34e41d1e-48b1-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.301773739s
STEP: Saw pod success
Mar 17 12:36:00.949: INFO: Pod "pod-secrets-34e41d1e-48b1-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:36:01.027: INFO: Trying to get logs from node kube pod pod-secrets-34e41d1e-48b1-11e9-bf64-0242ac110009 container secret-env-test: 
STEP: delete the pod
Mar 17 12:36:01.376: INFO: Waiting for pod pod-secrets-34e41d1e-48b1-11e9-bf64-0242ac110009 to disappear
Mar 17 12:36:01.436: INFO: Pod pod-secrets-34e41d1e-48b1-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:36:01.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7lnp5" for this suite.
Mar 17 12:36:07.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:36:07.757: INFO: namespace: e2e-tests-secrets-7lnp5, resource: bindings, ignored listing per whitelist
Mar 17 12:36:07.870: INFO: namespace e2e-tests-secrets-7lnp5 deletion completed in 6.430151483s

• [SLOW TEST:13.627 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
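A minimal sketch of what this spec exercises: a Secret whose key is injected into a pod's environment through secretKeyRef. The secret name, key and value here are illustrative; the container name matches the "secret-env-test" container in the log:

kubectl create secret generic secret-test --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
EOF
kubectl logs pod-secrets-env   # should print SECRET_DATA=value-1 once the pod has completed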
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:36:07.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Mar 17 12:36:22.148: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6xtwh PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 12:36:22.148: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 12:36:22.287: INFO: Exec stderr: ""
Mar 17 12:36:22.287: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6xtwh PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 12:36:22.287: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 12:36:22.429: INFO: Exec stderr: ""
Mar 17 12:36:22.429: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6xtwh PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 12:36:22.430: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 12:36:22.599: INFO: Exec stderr: ""
Mar 17 12:36:22.599: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6xtwh PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 12:36:22.599: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 12:36:22.703: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Mar 17 12:36:22.703: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6xtwh PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 12:36:22.703: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 12:36:22.818: INFO: Exec stderr: ""
Mar 17 12:36:22.818: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6xtwh PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 12:36:22.818: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 12:36:22.925: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Mar 17 12:36:22.925: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6xtwh PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 12:36:22.925: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 12:36:23.046: INFO: Exec stderr: ""
Mar 17 12:36:23.046: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6xtwh PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 12:36:23.046: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 12:36:23.160: INFO: Exec stderr: ""
Mar 17 12:36:23.160: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6xtwh PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 12:36:23.160: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 12:36:23.299: INFO: Exec stderr: ""
Mar 17 12:36:23.299: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6xtwh PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 12:36:23.299: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 12:36:23.443: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:36:23.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-6xtwh" for this suite.
Mar 17 12:37:11.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:37:11.547: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-6xtwh, resource: bindings, ignored listing per whitelist
Mar 17 12:37:11.614: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-6xtwh deletion completed in 48.167460757s

• [SLOW TEST:63.744 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
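What the spec verifies can be approximated by hand: for a pod with hostNetwork=false the kubelet mounts a managed /etc/hosts into each container (unless the container mounts its own file over that path), while a hostNetwork=true pod sees the node's /etc/hosts unchanged. A simplified sketch of the run's test-pod, with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-check
spec:
  hostNetwork: false        # flip to true to compare against the node's own /etc/hosts
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl exec etc-hosts-check -- cat /etc/hosts   # kubelet-managed copy when hostNetwork=false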
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:37:11.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 12:37:11.813: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:37:12.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-7gxjx" for this suite.
Mar 17 12:37:19.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:37:19.115: INFO: namespace: e2e-tests-custom-resource-definition-7gxjx, resource: bindings, ignored listing per whitelist
Mar 17 12:37:19.169: INFO: namespace e2e-tests-custom-resource-definition-7gxjx deletion completed in 6.24032209s

• [SLOW TEST:7.555 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
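The create/delete cycle can be reproduced with a manifest like the one below; on this v1.13 API server the apiextensions.k8s.io/v1beta1 API is the one to use (the group, kind and plural names here are invented for illustration):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  version: v1beta1
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
EOF
kubectl get crd noxus.mygroup.example.com
kubectl delete crd noxus.mygroup.example.com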
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:37:19.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Mar 17 12:37:20.010: INFO: Waiting up to 5m0s for pod "downward-api-67d3f9f0-48b1-11e9-bf64-0242ac110009" in namespace "e2e-tests-downward-api-lwjww" to be "success or failure"
Mar 17 12:37:20.330: INFO: Pod "downward-api-67d3f9f0-48b1-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 319.773221ms
Mar 17 12:37:22.333: INFO: Pod "downward-api-67d3f9f0-48b1-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322882967s
Mar 17 12:37:24.336: INFO: Pod "downward-api-67d3f9f0-48b1-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325660857s
Mar 17 12:37:26.338: INFO: Pod "downward-api-67d3f9f0-48b1-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.327910581s
STEP: Saw pod success
Mar 17 12:37:26.338: INFO: Pod "downward-api-67d3f9f0-48b1-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:37:26.340: INFO: Trying to get logs from node kube pod downward-api-67d3f9f0-48b1-11e9-bf64-0242ac110009 container dapi-container: 
STEP: delete the pod
Mar 17 12:37:29.374: INFO: Waiting for pod downward-api-67d3f9f0-48b1-11e9-bf64-0242ac110009 to disappear
Mar 17 12:37:29.607: INFO: Pod downward-api-67d3f9f0-48b1-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:37:29.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lwjww" for this suite.
Mar 17 12:37:35.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:37:36.009: INFO: namespace: e2e-tests-downward-api-lwjww, resource: bindings, ignored listing per whitelist
Mar 17 12:37:36.193: INFO: namespace e2e-tests-downward-api-lwjww deletion completed in 6.582727591s

• [SLOW TEST:17.024 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
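The point of this spec is that when a container declares no resource limits, downward-API references to limits.cpu and limits.memory fall back to the node's allocatable values. A minimal sketch, reusing the "dapi-container" name from the log (everything else is illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
kubectl logs downward-api-defaults   # values reflect node allocatable since no limits are set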
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:37:36.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Mar 17 12:37:36.574: INFO: namespace e2e-tests-kubectl-mq4z4
Mar 17 12:37:36.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mq4z4'
Mar 17 12:37:37.103: INFO: stderr: ""
Mar 17 12:37:37.103: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Mar 17 12:37:38.445: INFO: Selector matched 1 pods for map[app:redis]
Mar 17 12:37:38.445: INFO: Found 0 / 1
Mar 17 12:37:39.190: INFO: Selector matched 1 pods for map[app:redis]
Mar 17 12:37:39.190: INFO: Found 0 / 1
Mar 17 12:37:40.108: INFO: Selector matched 1 pods for map[app:redis]
Mar 17 12:37:40.108: INFO: Found 0 / 1
Mar 17 12:37:41.191: INFO: Selector matched 1 pods for map[app:redis]
Mar 17 12:37:41.191: INFO: Found 0 / 1
Mar 17 12:37:42.107: INFO: Selector matched 1 pods for map[app:redis]
Mar 17 12:37:42.107: INFO: Found 0 / 1
Mar 17 12:37:43.107: INFO: Selector matched 1 pods for map[app:redis]
Mar 17 12:37:43.107: INFO: Found 0 / 1
Mar 17 12:37:44.106: INFO: Selector matched 1 pods for map[app:redis]
Mar 17 12:37:44.106: INFO: Found 1 / 1
Mar 17 12:37:44.106: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Mar 17 12:37:44.109: INFO: Selector matched 1 pods for map[app:redis]
Mar 17 12:37:44.109: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar 17 12:37:44.109: INFO: wait on redis-master startup in e2e-tests-kubectl-mq4z4 
Mar 17 12:37:44.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-xkx4q redis-master --namespace=e2e-tests-kubectl-mq4z4'
Mar 17 12:37:44.197: INFO: stderr: ""
Mar 17 12:37:44.197: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 17 Mar 12:37:42.382 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Mar 12:37:42.382 # Server started, Redis version 3.2.12\n1:M 17 Mar 12:37:42.382 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Mar 12:37:42.382 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Mar 17 12:37:44.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-mq4z4'
Mar 17 12:37:44.422: INFO: stderr: ""
Mar 17 12:37:44.422: INFO: stdout: "service/rm2 exposed\n"
Mar 17 12:37:44.425: INFO: Service rm2 in namespace e2e-tests-kubectl-mq4z4 found.
STEP: exposing service
Mar 17 12:37:46.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-mq4z4'
Mar 17 12:37:46.963: INFO: stderr: ""
Mar 17 12:37:46.963: INFO: stdout: "service/rm3 exposed\n"
Mar 17 12:37:47.194: INFO: Service rm3 in namespace e2e-tests-kubectl-mq4z4 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:37:49.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mq4z4" for this suite.
Mar 17 12:38:11.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:38:11.318: INFO: namespace: e2e-tests-kubectl-mq4z4, resource: bindings, ignored listing per whitelist
Mar 17 12:38:11.379: INFO: namespace e2e-tests-kubectl-mq4z4 deletion completed in 22.177165668s

• [SLOW TEST:35.186 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
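The expose sequence maps directly onto plain kubectl usage: expose the redis-master replication controller as a service named rm2, then expose rm2 again under a second name and port, exactly as the two commands in the log do (the --kubeconfig and --namespace flags are dropped here for brevity; the RC manifest piped into 'kubectl create -f -' is not shown in the output, so any RC labelled app=redis would behave the same):

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get services rm2 rm3 -o wide   # both should point at the redis-master pod's port 6379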
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:38:11.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 17 12:38:11.903: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86a1468e-48b1-11e9-bf64-0242ac110009" in namespace "e2e-tests-downward-api-5zwxk" to be "success or failure"
Mar 17 12:38:11.914: INFO: Pod "downwardapi-volume-86a1468e-48b1-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 11.095489ms
Mar 17 12:38:14.053: INFO: Pod "downwardapi-volume-86a1468e-48b1-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149798796s
Mar 17 12:38:16.056: INFO: Pod "downwardapi-volume-86a1468e-48b1-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152469991s
STEP: Saw pod success
Mar 17 12:38:16.056: INFO: Pod "downwardapi-volume-86a1468e-48b1-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:38:16.057: INFO: Trying to get logs from node kube pod downwardapi-volume-86a1468e-48b1-11e9-bf64-0242ac110009 container client-container: 
STEP: delete the pod
Mar 17 12:38:16.085: INFO: Waiting for pod downwardapi-volume-86a1468e-48b1-11e9-bf64-0242ac110009 to disappear
Mar 17 12:38:16.106: INFO: Pod downwardapi-volume-86a1468e-48b1-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:38:16.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5zwxk" for this suite.
Mar 17 12:38:22.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:38:22.212: INFO: namespace: e2e-tests-downward-api-5zwxk, resource: bindings, ignored listing per whitelist
Mar 17 12:38:22.259: INFO: namespace e2e-tests-downward-api-5zwxk deletion completed in 6.150210382s

• [SLOW TEST:10.879 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
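Same downward-API fallback as the earlier env-var test, but surfaced through a downwardAPI volume: with no cpu limit set on the container, the projected file reports the node's allocatable CPU. A hedged sketch, reusing the "client-container" name from the log:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
kubectl logs downwardapi-volume-cpu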
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:38:22.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-lqhst
Mar 17 12:38:28.784: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-lqhst
STEP: checking the pod's current state and verifying that restartCount is present
Mar 17 12:38:28.789: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:42:28.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-lqhst" for this suite.
Mar 17 12:42:37.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:42:37.369: INFO: namespace: e2e-tests-container-probe-lqhst, resource: bindings, ignored listing per whitelist
Mar 17 12:42:37.430: INFO: namespace e2e-tests-container-probe-lqhst deletion completed in 8.445613008s

• [SLOW TEST:255.171 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
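The probe spec starts a pod whose /healthz endpoint keeps answering, waits roughly four minutes, and asserts restartCount is still 0. A pod along those lines (image and timings are illustrative; any container serving a healthy /healthz works):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # illustrative image that serves /healthz on port 8080
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 3
EOF
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'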
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:42:37.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 17 12:42:37.951: INFO: Waiting up to 5m0s for pod "pod-253e3dff-48b2-11e9-bf64-0242ac110009" in namespace "e2e-tests-emptydir-x9r77" to be "success or failure"
Mar 17 12:42:37.974: INFO: Pod "pod-253e3dff-48b2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 23.474116ms
Mar 17 12:42:39.981: INFO: Pod "pod-253e3dff-48b2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02962501s
Mar 17 12:42:42.003: INFO: Pod "pod-253e3dff-48b2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051594035s
Mar 17 12:42:44.006: INFO: Pod "pod-253e3dff-48b2-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054536119s
STEP: Saw pod success
Mar 17 12:42:44.006: INFO: Pod "pod-253e3dff-48b2-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:42:44.007: INFO: Trying to get logs from node kube pod pod-253e3dff-48b2-11e9-bf64-0242ac110009 container test-container: 
STEP: delete the pod
Mar 17 12:42:44.032: INFO: Waiting for pod pod-253e3dff-48b2-11e9-bf64-0242ac110009 to disappear
Mar 17 12:42:44.145: INFO: Pod pod-253e3dff-48b2-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:42:44.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-x9r77" for this suite.
Mar 17 12:42:50.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:42:50.199: INFO: namespace: e2e-tests-emptydir-x9r77, resource: bindings, ignored listing per whitelist
Mar 17 12:42:50.276: INFO: namespace e2e-tests-emptydir-x9r77 deletion completed in 6.128573801s

• [SLOW TEST:12.846 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
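The (root,0644,tmpfs) case mounts a memory-backed emptyDir, writes a file with mode 0644 as root and checks both the content and the mount type. An approximate manifest (file path and pod name are illustrative; "test-container" matches the container name in the log):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /ed/file && chmod 0644 /ed/file && ls -l /ed/file && mount | grep /ed"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-0644-tmpfs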
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:42:50.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 17 12:42:50.533: INFO: Waiting up to 5m0s for pod "pod-2cd4d0e2-48b2-11e9-bf64-0242ac110009" in namespace "e2e-tests-emptydir-slmk2" to be "success or failure"
Mar 17 12:42:50.554: INFO: Pod "pod-2cd4d0e2-48b2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 20.114303ms
Mar 17 12:42:52.559: INFO: Pod "pod-2cd4d0e2-48b2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024966429s
Mar 17 12:42:54.564: INFO: Pod "pod-2cd4d0e2-48b2-11e9-bf64-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030185775s
Mar 17 12:42:56.567: INFO: Pod "pod-2cd4d0e2-48b2-11e9-bf64-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033805035s
STEP: Saw pod success
Mar 17 12:42:56.567: INFO: Pod "pod-2cd4d0e2-48b2-11e9-bf64-0242ac110009" satisfied condition "success or failure"
Mar 17 12:42:56.571: INFO: Trying to get logs from node kube pod pod-2cd4d0e2-48b2-11e9-bf64-0242ac110009 container test-container: 
STEP: delete the pod
Mar 17 12:42:56.602: INFO: Waiting for pod pod-2cd4d0e2-48b2-11e9-bf64-0242ac110009 to disappear
Mar 17 12:42:56.609: INFO: Pod pod-2cd4d0e2-48b2-11e9-bf64-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:42:56.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-slmk2" for this suite.
Mar 17 12:43:04.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:43:04.841: INFO: namespace: e2e-tests-emptydir-slmk2, resource: bindings, ignored listing per whitelist
Mar 17 12:43:04.888: INFO: namespace e2e-tests-emptydir-slmk2 deletion completed in 8.274350276s

• [SLOW TEST:14.611 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
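The (root,0644,default) variant only changes the volume medium: dropping 'medium: Memory' from the sketch above leaves the emptyDir on the node's default storage, so the same file check runs against node-local disk rather than tmpfs:

volumes:
- name: ed
  emptyDir: {}   # default medium: node-local storage instead of tmpfs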
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:43:04.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-r4qqh
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 17 12:43:05.191: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 17 12:43:31.404: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-r4qqh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 17 12:43:31.404: INFO: >>> kubeConfig: /root/.kube/config
Mar 17 12:43:32.664: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:43:32.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-r4qqh" for this suite.
Mar 17 12:43:55.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:43:55.079: INFO: namespace: e2e-tests-pod-network-test-r4qqh, resource: bindings, ignored listing per whitelist
Mar 17 12:43:55.104: INFO: namespace e2e-tests-pod-network-test-r4qqh deletion completed in 22.437256716s

• [SLOW TEST:50.216 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
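The UDP connectivity check is visible in the ExecWithOptions line above: from the host-networked helper pod, the string 'hostName' is sent to the netserver pod's UDP port 8081 and any non-empty reply counts as success. Reproduced by hand it looks roughly like this (pod name, namespace, IP and port are the ones from this run and will differ elsewhere):

kubectl exec -n e2e-tests-pod-network-test-r4qqh host-test-container-pod -c hostexec -- \
  /bin/sh -c 'echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v "^\s*$"'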
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 17 12:43:55.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 17 12:43:55.387: INFO: Pod name rollover-pod: Found 0 pods out of 1
Mar 17 12:44:00.390: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 17 12:44:00.390: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Mar 17 12:44:02.393: INFO: Creating deployment "test-rollover-deployment"
Mar 17 12:44:02.408: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Mar 17 12:44:04.413: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Mar 17 12:44:04.417: INFO: Ensure that both replica sets have 1 created replica
Mar 17 12:44:04.421: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Mar 17 12:44:04.426: INFO: Updating deployment test-rollover-deployment
Mar 17 12:44:04.426: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Mar 17 12:44:08.271: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Mar 17 12:44:08.644: INFO: Make sure deployment "test-rollover-deployment" is complete
Mar 17 12:44:08.912: INFO: all replica sets need to contain the pod-template-hash label
Mar 17 12:44:08.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423445, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 17 12:44:10.919: INFO: all replica sets need to contain the pod-template-hash label
Mar 17 12:44:10.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423445, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 17 12:44:12.918: INFO: all replica sets need to contain the pod-template-hash label
Mar 17 12:44:12.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423451, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 17 12:44:14.917: INFO: all replica sets need to contain the pod-template-hash label
Mar 17 12:44:14.917: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423451, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 17 12:44:16.927: INFO: all replica sets need to contain the pod-template-hash label
Mar 17 12:44:16.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423451, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 17 12:44:19.023: INFO: all replica sets need to contain the pod-template-hash label
Mar 17 12:44:19.023: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423451, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 17 12:44:20.917: INFO: all replica sets need to contain the pod-template-hash label
Mar 17 12:44:20.917: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423451, loc:(*time.Location)(0x7b13a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63688423442, loc:(*time.Location)(0x7b13a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 17 12:44:23.034: INFO: 
Mar 17 12:44:23.034: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Mar 17 12:44:23.040: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-6n5pt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6n5pt/deployments/test-rollover-deployment,UID:57afbb93-48b2-11e9-a072-fa163e921bae,ResourceVersion:1300589,Generation:2,CreationTimestamp:2019-03-17 12:44:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-03-17 12:44:02 +0000 UTC 2019-03-17 12:44:02 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-03-17 12:44:22 +0000 UTC 2019-03-17 12:44:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-6b7f9d6597" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Mar 17 12:44:23.043: INFO: New ReplicaSet "test-rollover-deployment-6b7f9d6597" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6b7f9d6597,GenerateName:,Namespace:e2e-tests-deployment-6n5pt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6n5pt/replicasets/test-rollover-deployment-6b7f9d6597,UID:58e60129-48b2-11e9-a072-fa163e921bae,ResourceVersion:1300580,Generation:2,CreationTimestamp:2019-03-17 12:44:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 57afbb93-48b2-11e9-a072-fa163e921bae 0xc0010b62f7 0xc0010b62f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Mar 17 12:44:23.043: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Mar 17 12:44:23.043: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-6n5pt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6n5pt/replicasets/test-rollover-controller,UID:5380af6b-48b2-11e9-a072-fa163e921bae,ResourceVersion:1300588,Generation:2,CreationTimestamp:2019-03-17 12:43:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 57afbb93-48b2-11e9-a072-fa163e921bae 0xc000bcaaa7 0xc000bcaaa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Mar 17 12:44:23.043: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6586df867b,GenerateName:,Namespace:e2e-tests-deployment-6n5pt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6n5pt/replicasets/test-rollover-deployment-6586df867b,UID:57b39dbc-48b2-11e9-a072-fa163e921bae,ResourceVersion:1300547,Generation:2,CreationTimestamp:2019-03-17 12:44:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6586df867b,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 57afbb93-48b2-11e9-a072-fa163e921bae 0xc000bcaca7 0xc000bcaca8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6586df867b,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6586df867b,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Mar 17 12:44:23.045: INFO: Pod "test-rollover-deployment-6b7f9d6597-89c5k" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6b7f9d6597-89c5k,GenerateName:test-rollover-deployment-6b7f9d6597-,Namespace:e2e-tests-deployment-6n5pt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6n5pt/pods/test-rollover-deployment-6b7f9d6597-89c5k,UID:59263a61-48b2-11e9-a072-fa163e921bae,ResourceVersion:1300565,Generation:0,CreationTimestamp:2019-03-17 12:44:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-6b7f9d6597 58e60129-48b2-11e9-a072-fa163e921bae 0xc0010b7d77 0xc0010b7d78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-sr8cv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sr8cv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-sr8cv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kube,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0010b7df0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0010b7ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 12:44:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 12:44:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 12:44:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-03-17 12:44:05 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.7,PodIP:10.32.0.5,StartTime:2019-03-17 12:44:05 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-03-17 12:44:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://e72b55c2b5431bfd60f091732be76ca5a904076da7fdf58473f3873c3f8b2474}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 17 12:44:23.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-6n5pt" for this suite.
Mar 17 12:44:31.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 17 12:44:31.084: INFO: namespace: e2e-tests-deployment-6n5pt, resource: bindings, ignored listing per whitelist
Mar 17 12:44:31.307: INFO: namespace e2e-tests-deployment-6n5pt deletion completed in 8.259910328s

• [SLOW TEST:36.203 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
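In this spec, "rollover" means updating the Deployment's pod template again while the previous update is still in flight, then verifying that only the newest ReplicaSet (test-rollover-deployment-6b7f9d6597 in this run) keeps replicas while both older ReplicaSets drain to zero. A rough way to observe the same state by hand, using the label from the dumps above (only meaningful while the namespace still exists, since it is deleted at the end of the test):

kubectl -n e2e-tests-deployment-6n5pt rollout status deployment/test-rollover-deployment
kubectl -n e2e-tests-deployment-6n5pt get rs -l name=rollover-pod   # old ReplicaSets should report 0 desired/ready
kubectl -n e2e-tests-deployment-6n5pt get deployment test-rollover-deployment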
SSSSSS
Mar 17 12:44:31.307: INFO: Running AfterSuite actions on all nodes
Mar 17 12:44:31.307: INFO: Running AfterSuite actions on node 1
Mar 17 12:44:31.307: INFO: Dumping logs locally to: /home/opnfv/functest/results/k8s_conformance
Mar 17 12:44:31.308: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory


Summarizing 1 Failure:

[Fail] [sig-api-machinery] Namespaces [Serial] [It] should ensure that all pods are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161

Ran 200 of 2161 Specs in 7002.446 seconds
FAIL! -- 199 Passed | 1 Failed | 0 Pending | 1961 Skipped --- FAIL: TestE2E (7002.62s)
FAIL
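Only the Namespaces [Serial] spec at namespace.go:161 failed; the other 199 conformance specs passed, and the overall FAIL verdict comes from that single case. When chasing one failure like this, ginkgo's focus flag lets e2e.test rerun just the matching spec instead of the whole conformance set, along the lines of (regex trimmed to the essentials, to be combined with the usual kubeconfig/provider options):

e2e.test -ginkgo.focus 'should ensure that all pods are removed when a namespace is deleted'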

2019-03-17 12:44:31,333 - xtesting.ci.run_tests - INFO - Test result:

+-------------------------+------------------+------------------+----------------+
|        TEST CASE        |     PROJECT      |     DURATION     |     RESULT     |
+-------------------------+------------------+------------------+----------------+
|     k8s_conformance     |     functest     |      116:43      |      FAIL      |
+-------------------------+------------------+------------------+----------------+

2019-03-17 12:44:31,335 - xtesting.ci.run_tests - ERROR - The test case 'k8s_conformance' failed.
2019-03-17 12:44:31,335 - xtesting.ci.run_tests - INFO - Execution exit value: Result.EX_ERROR