I0828 03:47:19.789520 8 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0828 03:47:19.795214 8 e2e.go:109] Starting e2e run "54444fcb-452f-4e1d-8ddd-d4cfa5dbceef" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598586427 - Will randomize all specs
Will run 278 of 4844 specs

Aug 28 03:47:20.328: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 03:47:20.381: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 28 03:47:20.554: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 28 03:47:20.724: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 28 03:47:20.725: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 28 03:47:20.725: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 28 03:47:20.775: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 28 03:47:20.775: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 28 03:47:20.775: INFO: e2e test version: v1.17.11
Aug 28 03:47:20.780: INFO: kube-apiserver version: v1.17.5
Aug 28 03:47:20.781: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 03:47:20.804: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 03:47:20.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
Aug 28 03:47:20.940: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 28 03:47:29.705: INFO: 9 pods remaining
Aug 28 03:47:29.706: INFO: 0 pods has nil DeletionTimestamp
Aug 28 03:47:29.706: INFO:
Aug 28 03:47:30.852: INFO: 0 pods remaining
Aug 28 03:47:30.852: INFO: 0 pods has nil DeletionTimestamp
Aug 28 03:47:30.852: INFO:
Aug 28 03:47:32.167: INFO: 0 pods remaining
Aug 28 03:47:32.167: INFO: 0 pods has nil DeletionTimestamp
Aug 28 03:47:32.167: INFO:
Aug 28 03:47:33.331: INFO: 0 pods remaining
Aug 28 03:47:33.331: INFO: 0 pods has nil DeletionTimestamp
Aug 28 03:47:33.331: INFO:
STEP: Gathering metrics
W0828 03:47:34.643456 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 28 03:47:34.645: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 03:47:34.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4343" for this suite.
• [SLOW TEST:14.020 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":1,"skipped":37,"failed":0}
SSSSSSSSSS
------------------------------
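Note: the garbage collector spec above exercises foreground cascading deletion. With deleteOptions propagationPolicy=Foreground, the ReplicationController is kept (carrying a deletionTimestamp and the foregroundDeletion finalizer) until the garbage collector has removed every pod it owns, which is why the log counts pods remaining before the RC disappears. A minimal client-go sketch of such a delete call, assuming a recent context-aware client-go (the 1.17-era API took *metav1.DeleteOptions without a context); this is illustrative, not the e2e framework's own code:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteRCForeground deletes an RC with foreground propagation: the API
    // server keeps the RC around until the GC has deleted all of its pods.
    func deleteRCForeground(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        fg := metav1.DeletePropagationForeground
        return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{
            PropagationPolicy: &fg,
        })
    }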
[sig-network] DNS
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 03:47:34.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2362 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2362;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2362 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2362;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2362.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2362.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2362.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2362.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2362.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2362.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2362.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2362.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2362.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2362.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2362.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2362.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2362.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 156.35.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.35.156_udp@PTR;check="$$(dig +tcp +noall +answer +search 156.35.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.35.156_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2362 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2362;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2362 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2362;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2362.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2362.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2362.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2362.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2362.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2362.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2362.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2362.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2362.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2362.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2362.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2362.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2362.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 156.35.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.35.156_udp@PTR;check="$$(dig +tcp +noall +answer +search 156.35.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.35.156_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 28 03:47:43.496: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.503: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.508: INFO: Unable to read wheezy_udp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.512: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.516: INFO: Unable to read wheezy_udp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.519: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.524: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.530: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.624: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.629: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.633: INFO: Unable to read jessie_udp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.638: INFO: Unable to read jessie_tcp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.642: INFO: Unable to read jessie_udp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.646: INFO: Unable to read jessie_tcp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.650: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.655: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:43.678: INFO: Lookups using dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2362 wheezy_tcp@dns-test-service.dns-2362 wheezy_udp@dns-test-service.dns-2362.svc wheezy_tcp@dns-test-service.dns-2362.svc wheezy_udp@_http._tcp.dns-test-service.dns-2362.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2362.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2362 jessie_tcp@dns-test-service.dns-2362 jessie_udp@dns-test-service.dns-2362.svc jessie_tcp@dns-test-service.dns-2362.svc jessie_udp@_http._tcp.dns-test-service.dns-2362.svc jessie_tcp@_http._tcp.dns-test-service.dns-2362.svc]
Aug 28 03:47:48.686: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.691: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.697: INFO: Unable to read wheezy_udp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.701: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.705: INFO: Unable to read wheezy_udp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.708: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.712: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.715: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.747: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.750: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.754: INFO: Unable to read jessie_udp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.757: INFO: Unable to read jessie_tcp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.761: INFO: Unable to read jessie_udp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.765: INFO: Unable to read jessie_tcp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.769: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.773: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:48.795: INFO: Lookups using dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2362 wheezy_tcp@dns-test-service.dns-2362 wheezy_udp@dns-test-service.dns-2362.svc wheezy_tcp@dns-test-service.dns-2362.svc wheezy_udp@_http._tcp.dns-test-service.dns-2362.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2362.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2362 jessie_tcp@dns-test-service.dns-2362 jessie_udp@dns-test-service.dns-2362.svc jessie_tcp@dns-test-service.dns-2362.svc jessie_udp@_http._tcp.dns-test-service.dns-2362.svc jessie_tcp@_http._tcp.dns-test-service.dns-2362.svc]
Aug 28 03:47:53.685: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.691: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.696: INFO: Unable to read wheezy_udp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.700: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.703: INFO: Unable to read wheezy_udp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.709: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.714: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.717: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.769: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.772: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.777: INFO: Unable to read jessie_udp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.781: INFO: Unable to read jessie_tcp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.786: INFO: Unable to read jessie_udp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.792: INFO: Unable to read jessie_tcp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.796: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.800: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:53.820: INFO: Lookups using dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2362 wheezy_tcp@dns-test-service.dns-2362 wheezy_udp@dns-test-service.dns-2362.svc wheezy_tcp@dns-test-service.dns-2362.svc wheezy_udp@_http._tcp.dns-test-service.dns-2362.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2362.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2362 jessie_tcp@dns-test-service.dns-2362 jessie_udp@dns-test-service.dns-2362.svc jessie_tcp@dns-test-service.dns-2362.svc jessie_udp@_http._tcp.dns-test-service.dns-2362.svc jessie_tcp@_http._tcp.dns-test-service.dns-2362.svc]
Aug 28 03:47:58.687: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.692: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.697: INFO: Unable to read wheezy_udp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.702: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.706: INFO: Unable to read wheezy_udp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.709: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.713: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.717: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.747: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.750: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.754: INFO: Unable to read jessie_udp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.757: INFO: Unable to read jessie_tcp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.760: INFO: Unable to read jessie_udp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.763: INFO: Unable to read jessie_tcp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.765: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.768: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:47:58.789: INFO: Lookups using dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2362 wheezy_tcp@dns-test-service.dns-2362 wheezy_udp@dns-test-service.dns-2362.svc wheezy_tcp@dns-test-service.dns-2362.svc wheezy_udp@_http._tcp.dns-test-service.dns-2362.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2362.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2362 jessie_tcp@dns-test-service.dns-2362 jessie_udp@dns-test-service.dns-2362.svc jessie_tcp@dns-test-service.dns-2362.svc jessie_udp@_http._tcp.dns-test-service.dns-2362.svc jessie_tcp@_http._tcp.dns-test-service.dns-2362.svc]
Aug 28 03:48:03.685: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.689: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.693: INFO: Unable to read wheezy_udp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.697: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.701: INFO: Unable to read wheezy_udp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.705: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.741: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.746: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.775: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.779: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.783: INFO: Unable to read jessie_udp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.787: INFO: Unable to read jessie_tcp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.792: INFO: Unable to read jessie_udp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.797: INFO: Unable to read jessie_tcp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.802: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.806: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:03.828: INFO: Lookups using dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2362 wheezy_tcp@dns-test-service.dns-2362 wheezy_udp@dns-test-service.dns-2362.svc wheezy_tcp@dns-test-service.dns-2362.svc wheezy_udp@_http._tcp.dns-test-service.dns-2362.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2362.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2362 jessie_tcp@dns-test-service.dns-2362 jessie_udp@dns-test-service.dns-2362.svc jessie_tcp@dns-test-service.dns-2362.svc jessie_udp@_http._tcp.dns-test-service.dns-2362.svc jessie_tcp@_http._tcp.dns-test-service.dns-2362.svc]
Aug 28 03:48:08.685: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.690: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.710: INFO: Unable to read wheezy_udp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.715: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.719: INFO: Unable to read wheezy_udp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.723: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.728: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.732: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.757: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.760: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.763: INFO: Unable to read jessie_udp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.766: INFO: Unable to read jessie_tcp@dns-test-service.dns-2362 from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.770: INFO: Unable to read jessie_udp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.773: INFO: Unable to read jessie_tcp@dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.777: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.780: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2362.svc from pod dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9: the server could not find the requested resource (get pods dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9)
Aug 28 03:48:08.801: INFO: Lookups using dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2362 wheezy_tcp@dns-test-service.dns-2362 wheezy_udp@dns-test-service.dns-2362.svc wheezy_tcp@dns-test-service.dns-2362.svc wheezy_udp@_http._tcp.dns-test-service.dns-2362.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2362.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2362 jessie_tcp@dns-test-service.dns-2362 jessie_udp@dns-test-service.dns-2362.svc jessie_tcp@dns-test-service.dns-2362.svc jessie_udp@_http._tcp.dns-test-service.dns-2362.svc jessie_tcp@_http._tcp.dns-test-service.dns-2362.svc]
Aug 28 03:48:13.942: INFO: DNS probes using dns-2362/dns-test-e7ece45d-1162-40e5-9ac9-f70f8b66e0a9 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 03:48:15.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2362" for this suite.
• [SLOW TEST:40.724 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":2,"skipped":47,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
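Note: each probe above writes an OK marker file for a name only when dig returns an answer; the spec passes once every partially qualified form (service, service.namespace, service.namespace.svc) resolves through the pod's /etc/resolv.conf search path. A rough Go equivalent of one probe round, assuming the code runs inside a pod whose resolv.conf carries the usual cluster search domains (illustrative only; the actual test shells out to dig as logged above):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        names := []string{
            "dns-test-service",              // expanded by the namespace search domain
            "dns-test-service.dns-2362",     // namespace-qualified
            "dns-test-service.dns-2362.svc", // service-domain-qualified
        }
        // Go's built-in resolver also honors resolv.conf search/ndots rules.
        r := &net.Resolver{PreferGo: true}
        for _, n := range names {
            ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
            addrs, err := r.LookupHost(ctx, n)
            cancel()
            fmt.Println(n, addrs, err)
        }
    }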
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 03:48:15.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 28 03:48:20.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734183300, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734183300, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734183300, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734183299, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 03:48:22.312: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734183300, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734183300, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734183300, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734183299, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 03:48:25.252: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 03:48:25.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 03:48:26.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1945" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:11.145 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":3,"skipped":63,"failed":0}
SS
------------------------------
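Note: the conversion spec above deploys a webhook that the API server calls whenever a stored v1 object is requested as v2. A sketch of the handler shape such a webhook implements, using the apiextensions v1 ConversionReview types (illustrative; the real e2e webhook image does more than stamp the apiVersion):

    package main

    import (
        "encoding/json"
        "net/http"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime"
    )

    // convert handles a POSTed ConversionReview: it rewrites each object to
    // the desired apiVersion and echoes the request UID back in the response.
    func convert(w http.ResponseWriter, r *http.Request) {
        var review apiextensionsv1.ConversionReview
        if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
            http.Error(w, "bad ConversionReview", http.StatusBadRequest)
            return
        }
        resp := &apiextensionsv1.ConversionResponse{
            UID:    review.Request.UID,
            Result: metav1.Status{Status: metav1.StatusSuccess},
        }
        for _, obj := range review.Request.Objects {
            var u map[string]interface{}
            if err := json.Unmarshal(obj.Raw, &u); err != nil {
                continue
            }
            // A structurally identical v1 -> v2 conversion only needs the
            // apiVersion changed; real converters also move/rename fields.
            u["apiVersion"] = review.Request.DesiredAPIVersion
            raw, _ := json.Marshal(u)
            resp.ConvertedObjects = append(resp.ConvertedObjects, runtime.RawExtension{Raw: raw})
        }
        review.Response = resp
        json.NewEncoder(w).Encode(review)
    }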
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 03:48:26.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 28 03:48:26.825: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c12138b-7ec0-4790-907c-7aa545f2e7d1" in namespace "projected-3276" to be "success or failure"
Aug 28 03:48:26.851: INFO: Pod "downwardapi-volume-0c12138b-7ec0-4790-907c-7aa545f2e7d1": Phase="Pending", Reason="", readiness=false. Elapsed: 25.469768ms
Aug 28 03:48:29.257: INFO: Pod "downwardapi-volume-0c12138b-7ec0-4790-907c-7aa545f2e7d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.432505058s
Aug 28 03:48:31.400: INFO: Pod "downwardapi-volume-0c12138b-7ec0-4790-907c-7aa545f2e7d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.575369531s
Aug 28 03:48:33.411: INFO: Pod "downwardapi-volume-0c12138b-7ec0-4790-907c-7aa545f2e7d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.585625148s
STEP: Saw pod success
Aug 28 03:48:33.411: INFO: Pod "downwardapi-volume-0c12138b-7ec0-4790-907c-7aa545f2e7d1" satisfied condition "success or failure"
Aug 28 03:48:33.417: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0c12138b-7ec0-4790-907c-7aa545f2e7d1 container client-container:
STEP: delete the pod
Aug 28 03:48:33.500: INFO: Waiting for pod downwardapi-volume-0c12138b-7ec0-4790-907c-7aa545f2e7d1 to disappear
Aug 28 03:48:33.548: INFO: Pod downwardapi-volume-0c12138b-7ec0-4790-907c-7aa545f2e7d1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 03:48:33.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3276" for this suite.
• [SLOW TEST:6.851 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":65,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
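Note: the downward API volume spec above mounts a projected volume that exposes the container's own memory limit as a file, then verifies the value via the container's logs. A sketch of that pod shape using k8s.io/api/core/v1 types (names, image, and limit value are illustrative, not the exact e2e fixture):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // downwardAPIMemLimitPod builds a pod whose projected volume surfaces
    // the container's limits.memory as the file /etc/podinfo/memory_limit.
    func downwardAPIMemLimitPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{
                            corev1.ResourceMemory: resource.MustParse("64Mi"),
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path: "memory_limit",
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "limits.memory",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }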
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 03:48:33.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 03:48:33.644: INFO: Creating deployment "webserver-deployment"
Aug 28 03:48:33.681: INFO: Waiting for observed generation 1
Aug 28 03:48:35.957: INFO: Waiting for all required pods to come up
Aug 28 03:48:36.179: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 28 03:48:46.845: INFO: Waiting for deployment "webserver-deployment" to complete
Aug 28 03:48:46.855: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 28 03:48:46.869: INFO: Updating deployment webserver-deployment
Aug 28 03:48:46.870: INFO: Waiting for observed generation 2
Aug 28 03:48:48.887: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 28 03:48:49.077: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 28 03:48:49.083: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 28 03:48:49.098: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 28 03:48:49.098: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 28 03:48:49.102: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 28 03:48:49.109: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug 28 03:48:49.109: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug 28 03:48:49.118: INFO: Updating deployment webserver-deployment
Aug 28 03:48:49.118: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug 28 03:48:49.587: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 28 03:48:49.839: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 28 03:48:54.011: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6873 /apis/apps/v1/namespaces/deployment-6873/deployments/webserver-deployment 5893cf4e-66ad-4607-b87a-095fdaf1944c 4472584 3 2020-08-28 03:48:33 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400327b1c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-28 03:48:49 +0000 UTC,LastTransitionTime:2020-08-28 03:48:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-08-28 03:48:51 +0000 UTC,LastTransitionTime:2020-08-28 03:48:33 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
Aug 28 03:48:54.332: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-6873 /apis/apps/v1/namespaces/deployment-6873/replicasets/webserver-deployment-c7997dcc8 59520845-5a12-4567-8a97-b13682aa0ae3 4472572 3 2020-08-28 03:48:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 5893cf4e-66ad-4607-b87a-095fdaf1944c 0x400327b677 0x400327b678}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400327b6e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 28 03:48:54.332: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Aug 28 03:48:54.333: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-6873 /apis/apps/v1/namespaces/deployment-6873/replicasets/webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 4472582 3 2020-08-28 03:48:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 5893cf4e-66ad-4607-b87a-095fdaf1944c 0x400327b5b7 0x400327b5b8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400327b618 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Aug 28 03:48:55.379: INFO: Pod "webserver-deployment-595b5b9587-2hpxl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2hpxl webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-2hpxl 749e592b-a56b-4985-acf3-4de568e3e032 4472616 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x40032c1f87 0x40032c1f88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-28 03:48:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 03:48:55.380: INFO: Pod "webserver-deployment-595b5b9587-6cjpr" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6cjpr webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-6cjpr 6aa32110-8dbd-43b4-898d-ea3df175e88e 4472414 0 2020-08-28 03:48:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f820e7 0x4002f820e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},}
,HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.224,StartTime:2020-08-28 03:48:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 03:48:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7d6eb1b18462f626839e9abdd6160c17136715ff64a02559505a3286d0747636,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.224,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.381: INFO: Pod "webserver-deployment-595b5b9587-8dk92" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8dk92 webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-8dk92 62456bb1-13a2-4139-ab12-b41727b94c1c 4472395 0 2020-08-28 03:48:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f82267 0x4002f82268}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.223,StartTime:2020-08-28 03:48:34 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 03:48:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://314aae3bc825e3053a4429acd71f6d89d4a46ce745bc3dd9f2fae69265f76b3f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.223,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.382: INFO: Pod "webserver-deployment-595b5b9587-8hgwp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8hgwp webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-8hgwp 270dc47b-c32a-487a-ab26-3b628af97e53 4472588 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f823e7 0x4002f823e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-28 03:48:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.383: INFO: Pod "webserver-deployment-595b5b9587-8l6nc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8l6nc webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-8l6nc f5dc2f32-28f1-4828-a358-209953fe144c 4472388 0 2020-08-28 03:48:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f82547 0x4002f82548}] [] 
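
The per-pod "is available" / "is not available" verdicts in these dumps follow directly from the Ready condition: with MinReadySeconds:0 on both ReplicaSets, a pod counts as available as soon as Ready is True, while pods stuck in ContainerCreating carry Ready False with reason ContainersNotReady. A minimal sketch of that rule, using simplified stand-in types rather than the real client-go API:

package main

import "fmt"

// Simplified stand-ins for the API types seen in the dumps above.
type Condition struct {
    Type   string
    Status string
}

type Pod struct {
    Name       string
    Conditions []Condition
}

// isAvailable applies the minReadySeconds=0 case: available iff the
// Ready condition is True.
func isAvailable(p Pod) bool {
    for _, c := range p.Conditions {
        if c.Type == "Ready" && c.Status == "True" {
            return true
        }
    }
    return false
}

func main() {
    running := Pod{Name: "webserver-deployment-595b5b9587-8dk92",
        Conditions: []Condition{{"Ready", "True"}}}
    pending := Pod{Name: "webserver-deployment-595b5b9587-8hgwp",
        Conditions: []Condition{{"Ready", "False"}}}
    fmt.Println(isAvailable(running), isAvailable(pending)) // true false
}

Applied to the dumps above, webserver-deployment-595b5b9587-8dk92 (Ready True since 03:48:44) passes and webserver-deployment-595b5b9587-8hgwp (ContainersNotReady) does not, which is exactly how the log labels them.
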
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.168,StartTime:2020-08-28 03:48:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 03:48:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://955e391916ea1806875b5be6e0cb5560acdb831032766f3c38e4550203bc6306,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.168,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.384: INFO: Pod "webserver-deployment-595b5b9587-9d7vr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9d7vr webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-9d7vr df9c4db5-8bcb-461b-b75c-a9c495778752 4472598 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f826c7 0x4002f826c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-28 03:48:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.385: INFO: Pod "webserver-deployment-595b5b9587-9kf9l" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9kf9l webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-9kf9l 588047d4-9647-4115-848f-446b02d78fbb 4472586 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f82827 0x4002f82828}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-28 03:48:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.386: INFO: Pod "webserver-deployment-595b5b9587-9p7ng" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9p7ng webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-9p7ng 4aff151e-ac02-4d5c-ab96-e5bcdea769f6 4472577 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f82987 0x4002f82988}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:ni
l,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-28 03:48:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.387: INFO: Pod "webserver-deployment-595b5b9587-bjpgm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bjpgm webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-bjpgm 61b3856e-ca5b-4478-8c56-68c8b089699e 4472626 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f82ae7 0x4002f82ae8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-28 03:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.388: INFO: Pod "webserver-deployment-595b5b9587-j9xzf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-j9xzf webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-j9xzf 7fa49376-5be1-4632-a6ac-3c3397b35206 4472381 0 2020-08-28 03:48:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f82c47 0x4002f82c48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,Ena
bleServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.222,StartTime:2020-08-28 03:48:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 03:48:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9db9fed0a1a501a7bf9a77f859507c7f7f873b0d2501ebb65baffb2aa5893bf4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.222,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.389: INFO: Pod "webserver-deployment-595b5b9587-jwqbc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jwqbc webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-jwqbc 01f60969-56e3-48b1-a61c-7d1e59e6428f 4472579 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f82dc7 0x4002f82dc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-28 03:48:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.390: INFO: Pod "webserver-deployment-595b5b9587-kcspj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kcspj webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-kcspj a7d5f700-92d0-4ae2-9f63-f88906d39fbe 4472604 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f82f27 0x4002f82f28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil
,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-28 03:48:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.391: INFO: Pod "webserver-deployment-595b5b9587-m9pzz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-m9pzz webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-m9pzz ab535ac3-b934-43c0-b485-fa6df656d87e 4472433 0 2020-08-28 03:48:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f83087 0x4002f83088}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.225,StartTime:2020-08-28 03:48:34 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 03:48:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2edfdb6884fc33f362ee6671a455d4b2a6d4fc1b96f4dee9508904ffa24ca03e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.392: INFO: Pod "webserver-deployment-595b5b9587-p9pdx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-p9pdx webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-p9pdx 18aed734-f759-4bcf-8bd9-3c1d462e06d5 4472558 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f83207 0x4002f83208}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-28 03:48:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.393: INFO: Pod "webserver-deployment-595b5b9587-r5xcd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-r5xcd webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-r5xcd cceef64c-69e7-4342-851f-0190b548a68d 4472590 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f83367 0x4002f83368}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-28 03:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.394: INFO: Pod "webserver-deployment-595b5b9587-rzsgg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rzsgg webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-rzsgg 57f0c6c1-2181-4101-85d0-acf1353524d7 4472600 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f834c7 0x4002f834c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil
,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-28 03:48:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.451: INFO: Pod "webserver-deployment-595b5b9587-s72cs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-s72cs webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-s72cs 73962726-89eb-4eea-b9f3-b7bac2b5b344 4472609 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f83627 0x4002f83628}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-28 03:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.454: INFO: Pod "webserver-deployment-595b5b9587-vxpm8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vxpm8 webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-vxpm8 acba7446-7f26-4009-a3ab-9673b92c61c3 4472408 0 2020-08-28 03:48:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f83787 0x4002f83788}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.170,StartTime:2020-08-28 03:48:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 03:48:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0ac0cf574b3392f3008c170d27f41abdf1c9d3e45d7ebacbeb1b6e4db97f6d8c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.170,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.457: INFO: Pod "webserver-deployment-595b5b9587-zlczc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zlczc webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-zlczc b8f6ae12-910b-4c70-8713-1e1f74902c5d 4472354 0 2020-08-28 03:48:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f83907 0x4002f83908}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.221,StartTime:2020-08-28 03:48:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 03:48:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5bbc92a3f909a6331452009457b135c4991531796519c9e7edf4755b9933e8eb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.221,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.459: INFO: Pod "webserver-deployment-595b5b9587-zwdmj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zwdmj webserver-deployment-595b5b9587- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-595b5b9587-zwdmj 3fe91bcc-c50b-472e-a608-28510936bb64 4472430 0 2020-08-28 03:48:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3168bb32-1f67-450e-bcbd-2a556ad19165 0x4002f83a87 0x4002f83a88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.172,StartTime:2020-08-28 03:48:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 03:48:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f1bb88c3fd06a525634263ac942d80e502a0c85678190ca8e4c18ab09cf2d5df,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.172,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.461: INFO: Pod "webserver-deployment-c7997dcc8-4spqp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4spqp webserver-deployment-c7997dcc8- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-c7997dcc8-4spqp 4151d6ac-0b00-41ec-b0f5-fdb1160976fa 4472640 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 59520845-5a12-4567-8a97-b13682aa0ae3 0x4002f83c07 0x4002f83c08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-28 03:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.462: INFO: Pod "webserver-deployment-c7997dcc8-96vks" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-96vks webserver-deployment-c7997dcc8- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-c7997dcc8-96vks 59edbc11-28a1-4f7f-88d0-9b25a3e4ccbb 4472591 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 59520845-5a12-4567-8a97-b13682aa0ae3 0x4002f83d87 0x4002f83d88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-28 03:48:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.463: INFO: Pod "webserver-deployment-c7997dcc8-9v2dj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9v2dj webserver-deployment-c7997dcc8- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-c7997dcc8-9v2dj 92eec289-cb5e-4a3d-adbe-62a47068d5fe 4472612 0 2020-08-28 03:48:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 59520845-5a12-4567-8a97-b13682aa0ae3 0x4002f83f07 0x4002f83f08}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-28 03:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.464: INFO: Pod "webserver-deployment-c7997dcc8-gbdpj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gbdpj webserver-deployment-c7997dcc8- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-c7997dcc8-gbdpj 1e4b72fe-892e-4241-8ef7-d64d64f2e262 4472602 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 59520845-5a12-4567-8a97-b13682aa0ae3 0x4002fe0087 0x4002fe0088}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-28 03:48:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.465: INFO: Pod "webserver-deployment-c7997dcc8-gtc6p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gtc6p webserver-deployment-c7997dcc8- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-c7997dcc8-gtc6p 98bccefc-f17b-4a84-918f-2cff103d2761 4472464 0 2020-08-28 03:48:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 59520845-5a12-4567-8a97-b13682aa0ae3 0x4002fe0207 0x4002fe0208}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-28 03:48:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.466: INFO: Pod "webserver-deployment-c7997dcc8-jgvwq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jgvwq webserver-deployment-c7997dcc8- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-c7997dcc8-jgvwq d14fded2-3d78-4ff7-8c0c-2b5228474d19 4472650 0 2020-08-28 03:48:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 59520845-5a12-4567-8a97-b13682aa0ae3 0x4002fe0387 0x4002fe0388}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.226,StartTime:2020-08-28 03:48:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.226,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.467: INFO: Pod "webserver-deployment-c7997dcc8-jjxq6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jjxq6 webserver-deployment-c7997dcc8- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-c7997dcc8-jjxq6 06bf635f-82d3-4720-854c-14348815a3ad 4472621 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 59520845-5a12-4567-8a97-b13682aa0ae3 0x4002fe0537 0x4002fe0538}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-28 03:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.468: INFO: Pod "webserver-deployment-c7997dcc8-pm945" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pm945 webserver-deployment-c7997dcc8- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-c7997dcc8-pm945 07afe3c9-f7ad-4ffb-bc26-c56d7b7bbbd9 4472562 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 59520845-5a12-4567-8a97-b13682aa0ae3 0x4002fe06b7 0x4002fe06b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-28 03:48:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.470: INFO: Pod "webserver-deployment-c7997dcc8-s9jgw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s9jgw webserver-deployment-c7997dcc8- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-c7997dcc8-s9jgw bd4c2a11-4f1d-420e-bcb8-5e3340c0c908 4472487 0 2020-08-28 03:48:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 59520845-5a12-4567-8a97-b13682aa0ae3 0x4002fe0837 0x4002fe0838}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-28 03:48:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.471: INFO: Pod "webserver-deployment-c7997dcc8-sxcsg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sxcsg webserver-deployment-c7997dcc8- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-c7997dcc8-sxcsg 38aca38f-73be-44ec-a281-44487e5abe67 4472633 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 59520845-5a12-4567-8a97-b13682aa0ae3 0x4002fe09b7 0x4002fe09b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-28 03:48:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.472: INFO: Pod "webserver-deployment-c7997dcc8-tdf4p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tdf4p webserver-deployment-c7997dcc8- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-c7997dcc8-tdf4p fed62489-a10e-4fd4-9dcb-41cce24026ce 4472477 0 2020-08-28 03:48:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 59520845-5a12-4567-8a97-b13682aa0ae3 0x4002fe0b37 0x4002fe0b38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-28 03:48:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.473: INFO: Pod "webserver-deployment-c7997dcc8-tlggn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tlggn webserver-deployment-c7997dcc8- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-c7997dcc8-tlggn 16db2b60-1ede-452e-b44d-8f0cf3e332ff 4472490 0 2020-08-28 03:48:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 59520845-5a12-4567-8a97-b13682aa0ae3 0x4002fe0cb7 0x4002fe0cb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-28 03:48:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 28 03:48:55.474: INFO: Pod "webserver-deployment-c7997dcc8-vq877" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vq877 webserver-deployment-c7997dcc8- deployment-6873 /api/v1/namespaces/deployment-6873/pods/webserver-deployment-c7997dcc8-vq877 ed9e0a1c-d2fd-4ae4-a2cc-daa407be41c9 4472597 0 2020-08-28 03:48:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 59520845-5a12-4567-8a97-b13682aa0ae3 0x4002fe0e37 0x4002fe0e38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c86cp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c86cp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c86cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 03:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-28 03:48:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:48:55.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6873" for this suite. • [SLOW TEST:23.950 seconds] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":5,"skipped":121,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:48:57.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9789 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Aug 28 03:48:59.657: INFO: Found 0 stateful pods, waiting for 3 Aug 28 03:49:10.563: INFO: Found 1 stateful pods, waiting for 3 Aug 28 03:49:19.810: INFO: Found 2 stateful pods, waiting for 3 Aug 28 03:49:29.847: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 28 03:49:29.847: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 28 03:49:29.847: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 28 03:49:40.069: INFO: Waiting for pod ss2-0 to enter Running - 
Ready=true, currently Running - Ready=true Aug 28 03:49:40.069: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 28 03:49:40.069: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 28 03:49:40.374: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 28 03:49:50.428: INFO: Updating stateful set ss2 Aug 28 03:49:50.447: INFO: Waiting for Pod statefulset-9789/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 28 03:50:00.457: INFO: Waiting for Pod statefulset-9789/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Aug 28 03:50:10.816: INFO: Found 2 stateful pods, waiting for 3 Aug 28 03:50:20.823: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 28 03:50:20.823: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 28 03:50:20.823: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Aug 28 03:50:20.854: INFO: Updating stateful set ss2 Aug 28 03:50:21.029: INFO: Waiting for Pod statefulset-9789/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 28 03:50:31.051: INFO: Waiting for Pod statefulset-9789/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 28 03:50:41.060: INFO: Updating stateful set ss2 Aug 28 03:50:41.383: INFO: Waiting for StatefulSet statefulset-9789/ss2 to complete update Aug 28 03:50:41.383: INFO: Waiting for Pod statefulset-9789/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Aug 28 03:50:51.395: INFO: Deleting all statefulset in ns statefulset-9789 Aug 28 03:50:51.400: INFO: Scaling statefulset ss2 to 0 Aug 28 03:51:01.430: INFO: Waiting for statefulset status.replicas updated to 0 Aug 28 03:51:01.434: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:51:01.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9789" for this suite. 
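------------------------------
The canary and the phased rolling update above are both driven by a single field, spec.updateStrategy.rollingUpdate.partition: on a template change, only pods with an ordinal greater than or equal to the partition move to the new revision, while pods below it stay pinned to the old one even if they are deleted and recreated (the "Restoring Pods to the correct revision when they are deleted" step). A minimal sketch of that strategy, assuming the k8s.io/api types at the v0.17.x line; the replica count and partition values here are illustrative, not read from the test:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// With 3 replicas: partition=3 applies the new template to nothing
	// ("Not applying an update when the partition is greater than the
	// number of replicas" above), partition=2 updates only the highest
	// ordinal, ss2-2 (the canary), and lowering the partition step by
	// step phases the rollout across ss2-1 and ss2-0.
	strategy := appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: int32Ptr(2),
		},
	}
	out, _ := json.MarshalIndent(strategy, "", "  ")
	fmt.Println(string(out)) // this object sits under spec.updateStrategy of the StatefulSet
}
------------------------------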
• [SLOW TEST:123.968 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":6,"skipped":140,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:51:01.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Aug 28 03:51:01.655: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:01.667: INFO: Number of nodes with available pods: 0 Aug 28 03:51:01.667: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:02.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:02.684: INFO: Number of nodes with available pods: 0 Aug 28 03:51:02.684: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:03.696: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:04.246: INFO: Number of nodes with available pods: 0 Aug 28 03:51:04.246: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:04.677: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:04.683: INFO: Number of nodes with available pods: 0 Aug 28 03:51:04.683: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:05.677: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:05.682: INFO: Number of nodes with available pods: 1 Aug 28 03:51:05.683: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:06.679: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:06.707: INFO: Number of nodes with available pods: 1 Aug 28 03:51:06.707: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:07.673: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:07.677: INFO: Number of nodes with available pods: 2 Aug 28 03:51:07.677: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Aug 28 03:51:07.699: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:07.718: INFO: Number of nodes with available pods: 1 Aug 28 03:51:07.718: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:08.728: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:08.732: INFO: Number of nodes with available pods: 1 Aug 28 03:51:08.732: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:09.729: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:09.734: INFO: Number of nodes with available pods: 1 Aug 28 03:51:09.734: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:10.727: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:10.731: INFO: Number of nodes with available pods: 1 Aug 28 03:51:10.732: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:11.749: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:11.782: INFO: Number of nodes with available pods: 1 Aug 28 03:51:11.782: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:12.725: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:12.728: INFO: Number of nodes with available pods: 1 Aug 28 03:51:12.729: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:14.315: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:14.320: INFO: Number of nodes with available pods: 1 Aug 28 03:51:14.320: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:14.749: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:14.927: INFO: Number of nodes with available pods: 1 Aug 28 03:51:14.927: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:16.076: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:16.347: INFO: Number of nodes with available pods: 1 Aug 28 03:51:16.348: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:16.798: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:16.893: INFO: Number of nodes with available pods: 1 Aug 28 03:51:16.893: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:18.477: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:18.766: INFO: Number of nodes with available pods: 1 Aug 28 03:51:18.766: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:19.727: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:19.733: INFO: Number of nodes with available pods: 1 Aug 28 03:51:19.733: INFO: Node jerma-worker is running more than one daemon pod Aug 28 03:51:20.809: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 03:51:20.873: INFO: Number of nodes with available pods: 2 Aug 28 03:51:20.873: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5493, will wait for the garbage collector to delete the pods Aug 28 03:51:20.958: INFO: Deleting DaemonSet.extensions daemon-set took: 18.609248ms Aug 28 03:51:21.360: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.929498ms Aug 28 03:51:31.765: INFO: Number of nodes with available pods: 0 Aug 28 03:51:31.765: INFO: Number of running nodes: 0, number of available pods: 0 Aug 28 03:51:31.788: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5493/daemonsets","resourceVersion":"4474016"},"items":null} Aug 28 03:51:31.793: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5493/pods","resourceVersion":"4474016"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:51:31.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5493" for this suite. 
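------------------------------
The repeated "DaemonSet pods can't tolerate node jerma-control-plane" lines are the DaemonSet controller's scheduling rule at work: a node whose taints the pod template does not tolerate is excluded, which is why only the two worker nodes are counted. A DaemonSet that should also cover the tainted control-plane node would add a matching toleration to its pod template. A minimal sketch, assuming the k8s.io/api types at the v0.17.x line:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Tolerate the taint shown in the log: node-role.kubernetes.io/master:NoSchedule.
	// Operator Exists matches the taint regardless of its value.
	tol := corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	// This slice would go under spec.template.spec.tolerations of the DaemonSet.
	out, _ := json.MarshalIndent([]corev1.Toleration{tol}, "", "  ")
	fmt.Println(string(out))
}
------------------------------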
• [SLOW TEST:30.324 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":7,"skipped":144,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:51:31.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 28 03:51:31.980: INFO: Waiting up to 5m0s for pod "pod-dcaa10f3-3bc4-4f17-a2b8-bcdd1b1316ed" in namespace "emptydir-7728" to be "success or failure" Aug 28 03:51:31.989: INFO: Pod "pod-dcaa10f3-3bc4-4f17-a2b8-bcdd1b1316ed": Phase="Pending", Reason="", readiness=false. Elapsed: 7.964092ms Aug 28 03:51:34.125: INFO: Pod "pod-dcaa10f3-3bc4-4f17-a2b8-bcdd1b1316ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144460257s Aug 28 03:51:36.432: INFO: Pod "pod-dcaa10f3-3bc4-4f17-a2b8-bcdd1b1316ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.451317739s Aug 28 03:51:38.460: INFO: Pod "pod-dcaa10f3-3bc4-4f17-a2b8-bcdd1b1316ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.478993164s Aug 28 03:51:40.465: INFO: Pod "pod-dcaa10f3-3bc4-4f17-a2b8-bcdd1b1316ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.484593587s STEP: Saw pod success Aug 28 03:51:40.465: INFO: Pod "pod-dcaa10f3-3bc4-4f17-a2b8-bcdd1b1316ed" satisfied condition "success or failure" Aug 28 03:51:40.469: INFO: Trying to get logs from node jerma-worker pod pod-dcaa10f3-3bc4-4f17-a2b8-bcdd1b1316ed container test-container: STEP: delete the pod Aug 28 03:51:40.521: INFO: Waiting for pod pod-dcaa10f3-3bc4-4f17-a2b8-bcdd1b1316ed to disappear Aug 28 03:51:40.533: INFO: Pod pod-dcaa10f3-3bc4-4f17-a2b8-bcdd1b1316ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:51:40.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7728" for this suite. 
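------------------------------
The "(non-root,0666,tmpfs)" triple in the test name describes the pod it builds: an emptyDir volume backed by memory (tmpfs) rather than node disk, a container running as a non-root UID, and a file created with 0666 permissions whose mode and content are then verified. A minimal sketch of a pod spec with that shape, assuming the k8s.io/api types at the v0.17.x line; the UID, image, and shell command are illustrative, not taken from the test:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	spec := corev1.PodSpec{
		// Run the pod as a non-root UID (illustrative value).
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1001)},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Medium "Memory" makes the emptyDir tmpfs-backed.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "busybox",
			// umask 0 so the created file comes out world-writable (0666).
			Command:      []string{"sh", "-c", "umask 0; echo ok > /mnt/volume/file && ls -l /mnt/volume/file"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/volume"}},
		}},
		RestartPolicy: corev1.RestartPolicyNever,
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
------------------------------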
• [SLOW TEST:8.731 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":151,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:51:40.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-9ffn STEP: Creating a pod to test atomic-volume-subpath Aug 28 03:51:40.672: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9ffn" in namespace "subpath-1908" to be "success or failure" Aug 28 03:51:40.684: INFO: Pod "pod-subpath-test-secret-9ffn": Phase="Pending", Reason="", readiness=false. Elapsed: 11.678449ms Aug 28 03:51:42.689: INFO: Pod "pod-subpath-test-secret-9ffn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017348345s Aug 28 03:51:44.695: INFO: Pod "pod-subpath-test-secret-9ffn": Phase="Running", Reason="", readiness=true. Elapsed: 4.022888278s Aug 28 03:51:46.725: INFO: Pod "pod-subpath-test-secret-9ffn": Phase="Running", Reason="", readiness=true. Elapsed: 6.053000955s Aug 28 03:51:48.732: INFO: Pod "pod-subpath-test-secret-9ffn": Phase="Running", Reason="", readiness=true. Elapsed: 8.059560022s Aug 28 03:51:50.739: INFO: Pod "pod-subpath-test-secret-9ffn": Phase="Running", Reason="", readiness=true. Elapsed: 10.067254779s Aug 28 03:51:53.097: INFO: Pod "pod-subpath-test-secret-9ffn": Phase="Running", Reason="", readiness=true. Elapsed: 12.42449304s Aug 28 03:51:55.102: INFO: Pod "pod-subpath-test-secret-9ffn": Phase="Running", Reason="", readiness=true. Elapsed: 14.429975346s Aug 28 03:51:57.108: INFO: Pod "pod-subpath-test-secret-9ffn": Phase="Running", Reason="", readiness=true. Elapsed: 16.43546761s Aug 28 03:51:59.114: INFO: Pod "pod-subpath-test-secret-9ffn": Phase="Running", Reason="", readiness=true. Elapsed: 18.441562705s Aug 28 03:52:01.120: INFO: Pod "pod-subpath-test-secret-9ffn": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.447602073s Aug 28 03:52:03.659: INFO: Pod "pod-subpath-test-secret-9ffn": Phase="Running", Reason="", readiness=true. Elapsed: 22.987015647s Aug 28 03:52:05.666: INFO: Pod "pod-subpath-test-secret-9ffn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.993822855s STEP: Saw pod success Aug 28 03:52:05.666: INFO: Pod "pod-subpath-test-secret-9ffn" satisfied condition "success or failure" Aug 28 03:52:05.670: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-9ffn container test-container-subpath-secret-9ffn: STEP: delete the pod Aug 28 03:52:06.012: INFO: Waiting for pod pod-subpath-test-secret-9ffn to disappear Aug 28 03:52:06.053: INFO: Pod pod-subpath-test-secret-9ffn no longer exists STEP: Deleting pod pod-subpath-test-secret-9ffn Aug 28 03:52:06.054: INFO: Deleting pod "pod-subpath-test-secret-9ffn" in namespace "subpath-1908" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:52:06.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1908" for this suite. • [SLOW TEST:25.523 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":9,"skipped":200,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:52:06.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-474a4613-8259-4311-b612-b6207b442f00 in namespace container-probe-6930 Aug 28 03:52:12.603: INFO: Started pod busybox-474a4613-8259-4311-b612-b6207b442f00 in namespace container-probe-6930 STEP: checking the pod's current state and verifying that restartCount is present Aug 28 03:52:12.610: INFO: Initial restart count of pod 
Aug 28 03:53:01.647: INFO: Restart count of pod container-probe-6930/busybox-474a4613-8259-4311-b612-b6207b442f00 is now 1 (49.036776105s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:53:01.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6930" for this suite. • [SLOW TEST:55.707 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":201,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:53:01.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 28 03:53:01.916: INFO: Waiting up to 5m0s for pod "downward-api-444a4d21-c104-4826-b21a-18eac1001374" in namespace "downward-api-7915" to be "success or failure" Aug 28 03:53:01.955: INFO: Pod "downward-api-444a4d21-c104-4826-b21a-18eac1001374": Phase="Pending", Reason="", readiness=false. Elapsed: 38.851554ms Aug 28 03:53:03.962: INFO: Pod "downward-api-444a4d21-c104-4826-b21a-18eac1001374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045987825s Aug 28 03:53:05.969: INFO: Pod "downward-api-444a4d21-c104-4826-b21a-18eac1001374": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053494545s Aug 28 03:53:08.005: INFO: Pod "downward-api-444a4d21-c104-4826-b21a-18eac1001374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088671284s
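------------------------------
The Downward API spec that just succeeded injects the container's own limits.cpu/memory and requests.cpu/memory into environment variables via resourceFieldRef; the related "should provide host IP" spec later in this run uses fieldRef the same way. A minimal sketch of that env wiring (the env var names and busybox command are illustrative):

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    c := v1.Container{
        Name:    "dapi-container",
        Image:   "busybox",
        Command: []string{"sh", "-c", "env"},
        Env: []v1.EnvVar{
            {
                // Resource values come from resourceFieldRef.
                Name: "CPU_LIMIT",
                ValueFrom: &v1.EnvVarSource{
                    ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.cpu"},
                },
            },
            {
                // Pod/host status fields come from fieldRef, as in the
                // "should provide host IP" spec further below.
                Name: "HOST_IP",
                ValueFrom: &v1.EnvVarSource{
                    FieldRef: &v1.ObjectFieldSelector{FieldPath: "status.hostIP"},
                },
            },
        },
    }
    fmt.Printf("%d env vars wired via the downward API\n", len(c.Env))
}
------------------------------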
STEP: Saw pod success Aug 28 03:53:08.005: INFO: Pod "downward-api-444a4d21-c104-4826-b21a-18eac1001374" satisfied condition "success or failure" Aug 28 03:53:08.034: INFO: Trying to get logs from node jerma-worker2 pod downward-api-444a4d21-c104-4826-b21a-18eac1001374 container dapi-container: STEP: delete the pod Aug 28 03:53:08.106: INFO: Waiting for pod downward-api-444a4d21-c104-4826-b21a-18eac1001374 to disappear Aug 28 03:53:08.112: INFO: Pod downward-api-444a4d21-c104-4826-b21a-18eac1001374 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:53:08.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7915" for this suite. • [SLOW TEST:6.351 seconds] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":246,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:53:08.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 03:53:12.384: INFO: Waiting up to 5m0s for pod "client-envvars-05e5eb1d-f404-4f05-a32c-7dbfef5b88f1" in namespace "pods-8882" to be "success or failure" Aug 28 03:53:12.389: INFO: Pod "client-envvars-05e5eb1d-f404-4f05-a32c-7dbfef5b88f1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.181563ms Aug 28 03:53:14.396: INFO: Pod "client-envvars-05e5eb1d-f404-4f05-a32c-7dbfef5b88f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011538856s Aug 28 03:53:16.409: INFO: Pod "client-envvars-05e5eb1d-f404-4f05-a32c-7dbfef5b88f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025112711s Aug 28 03:53:18.420: INFO: Pod "client-envvars-05e5eb1d-f404-4f05-a32c-7dbfef5b88f1": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.035591363s STEP: Saw pod success Aug 28 03:53:18.420: INFO: Pod "client-envvars-05e5eb1d-f404-4f05-a32c-7dbfef5b88f1" satisfied condition "success or failure" Aug 28 03:53:18.428: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-05e5eb1d-f404-4f05-a32c-7dbfef5b88f1 container env3cont: STEP: delete the pod Aug 28 03:53:18.446: INFO: Waiting for pod client-envvars-05e5eb1d-f404-4f05-a32c-7dbfef5b88f1 to disappear Aug 28 03:53:18.450: INFO: Pod client-envvars-05e5eb1d-f404-4f05-a32c-7dbfef5b88f1 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:53:18.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8882" for this suite. • [SLOW TEST:10.323 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":261,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:53:18.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
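------------------------------
The pod dump that follows illustrates what dnsPolicy=None with a customized dnsConfig means: the kubelet ignores cluster DNS entirely and writes exactly the listed nameserver and search domain into the pod's resolv.conf. The relevant part of the spec, reconstructed from the dump below as a sketch:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-1661", Namespace: "dns-1661"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:  "agnhost",
                Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                Args:  []string{"pause"},
            }},
            // DNSNone: skip cluster DNS; DNSConfig below becomes the
            // pod's entire resolv.conf.
            DNSPolicy: v1.DNSNone,
            DNSConfig: &v1.PodDNSConfig{
                Nameservers: []string{"1.1.1.1"},
                Searches:    []string{"resolv.conf.local"},
            },
        },
    }
    fmt.Printf("dnsPolicy=%s dnsConfig=%+v\n", pod.Spec.DNSPolicy, pod.Spec.DNSConfig)
}
------------------------------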
Aug 28 03:53:18.563: INFO: Created pod &Pod{ObjectMeta:{dns-1661 dns-1661 /api/v1/namespaces/dns-1661/pods/dns-1661 1e0e63f8-e46b-42e4-8ed9-57bba6fe47b5 4474549 0 2020-08-28 03:53:18 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pmdcb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pmdcb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pmdcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
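------------------------------
Both verification steps here exec the agnhost helper inside the pod (dns-suffix, then dns-server-list) to report the DNS configuration the container actually sees. Conceptually that boils down to parsing the pod's /etc/resolv.conf; a stdlib-only sketch of the idea (an assumption about the check's mechanics, not agnhost's actual implementation):

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

func main() {
    f, err := os.Open("/etc/resolv.conf")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    // Collect nameserver and search entries so they can be compared
    // against the PodDNSConfig (1.1.1.1 / resolv.conf.local above).
    var nameservers, searches []string
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) < 2 {
            continue
        }
        switch fields[0] {
        case "nameserver":
            nameservers = append(nameservers, fields[1])
        case "search":
            searches = append(searches, fields[1:]...)
        }
    }
    fmt.Println("nameservers:", nameservers, "searches:", searches)
}
------------------------------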
Aug 28 03:53:22.592: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1661 PodName:dns-1661 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 03:53:22.593: INFO: >>> kubeConfig: /root/.kube/config I0828 03:53:22.677262 8 log.go:172] (0x40028b0580) (0x40011ea0a0) Create stream I0828 03:53:22.677892 8 log.go:172] (0x40028b0580) (0x40011ea0a0) Stream added, broadcasting: 1 I0828 03:53:22.694351 8 log.go:172] (0x40028b0580) Reply frame received for 1 I0828 03:53:22.694942 8 log.go:172] (0x40028b0580) (0x4000ee4820) Create stream I0828 03:53:22.695015 8 log.go:172] (0x40028b0580) (0x4000ee4820) Stream added, broadcasting: 3 I0828 03:53:22.696501 8 log.go:172] (0x40028b0580) Reply frame received for 3 I0828 03:53:22.696799 8 log.go:172] (0x40028b0580) (0x400183d040) Create stream I0828 03:53:22.696864 8 log.go:172] (0x40028b0580) (0x400183d040) Stream added, broadcasting: 5 I0828 03:53:22.699015 8 log.go:172] (0x40028b0580) Reply frame received for 5 I0828 03:53:22.781148 8 log.go:172] (0x40028b0580) Data frame received for 3 I0828 03:53:22.781383 8 log.go:172] (0x4000ee4820) (3) Data frame handling I0828 03:53:22.781785 8 log.go:172] (0x4000ee4820) (3) Data frame sent I0828 03:53:22.783650 8 log.go:172] (0x40028b0580) Data frame received for 3 I0828 03:53:22.783810 8 log.go:172] (0x4000ee4820) (3) Data frame handling I0828 03:53:22.784328 8 log.go:172] (0x40028b0580) Data frame received for 5 I0828 03:53:22.784401 8 log.go:172] (0x400183d040) (5) Data frame handling I0828 03:53:22.785474 8 log.go:172] (0x40028b0580) Data frame received for 1 I0828 03:53:22.785624 8 log.go:172] (0x40011ea0a0) (1) Data frame handling I0828 03:53:22.785838 8 log.go:172] (0x40011ea0a0) (1) Data frame sent I0828 03:53:22.787497 8 log.go:172] (0x40028b0580) (0x40011ea0a0) Stream removed, broadcasting: 1 I0828 03:53:22.789555 8 log.go:172] (0x40028b0580) Go away received I0828 03:53:22.792150 8 log.go:172] (0x40028b0580) (0x40011ea0a0) Stream removed, broadcasting: 1 I0828 03:53:22.792514 8 log.go:172] (0x40028b0580) (0x4000ee4820) Stream removed, broadcasting: 3 I0828 03:53:22.792815 8 log.go:172] (0x40028b0580) (0x400183d040) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Aug 28 03:53:22.793: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1661 PodName:dns-1661 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 03:53:22.793: INFO: >>> kubeConfig: /root/.kube/config I0828 03:53:22.881009 8 log.go:172] (0x400276a630) (0x4001480000) Create stream I0828 03:53:22.881298 8 log.go:172] (0x400276a630) (0x4001480000) Stream added, broadcasting: 1 I0828 03:53:22.886859 8 log.go:172] (0x400276a630) Reply frame received for 1 I0828 03:53:22.887108 8 log.go:172] (0x400276a630) (0x4001720000) Create stream I0828 03:53:22.887200 8 log.go:172] (0x400276a630) (0x4001720000) Stream added, broadcasting: 3 I0828 03:53:22.889418 8 log.go:172] (0x400276a630) Reply frame received for 3 I0828 03:53:22.889672 8 log.go:172] (0x400276a630) (0x4001720140) Create stream I0828 03:53:22.889802 8 log.go:172] (0x400276a630) (0x4001720140) Stream added, broadcasting: 5 I0828 03:53:22.891493 8 log.go:172] (0x400276a630) Reply frame received for 5 I0828 03:53:22.979024 8 log.go:172] (0x400276a630) Data frame received for 3 I0828 03:53:22.979191 8 log.go:172] (0x4001720000) (3) Data frame handling I0828 03:53:22.979304 8 log.go:172] (0x4001720000) (3) Data frame sent I0828 03:53:22.983070 8 log.go:172] (0x400276a630) Data frame received for 3 I0828 03:53:22.983230 8 log.go:172] (0x4001720000) (3) Data frame handling I0828 03:53:22.983401 8 log.go:172] (0x400276a630) Data frame received for 5 I0828 03:53:22.983504 8 log.go:172] (0x4001720140) (5) Data frame handling I0828 03:53:22.985467 8 log.go:172] (0x400276a630) Data frame received for 1 I0828 03:53:22.985548 8 log.go:172] (0x4001480000) (1) Data frame handling I0828 03:53:22.985641 8 log.go:172] (0x4001480000) (1) Data frame sent I0828 03:53:22.985730 8 log.go:172] (0x400276a630) (0x4001480000) Stream removed, broadcasting: 1 I0828 03:53:22.985851 8 log.go:172] (0x400276a630) Go away received I0828 03:53:22.986230 8 log.go:172] (0x400276a630) (0x4001480000) Stream removed, broadcasting: 1 I0828 03:53:22.986350 8 log.go:172] (0x400276a630) (0x4001720000) Stream removed, broadcasting: 3 I0828 03:53:22.986415 8 log.go:172] (0x400276a630) (0x4001720140) Stream removed, broadcasting: 5 Aug 28 03:53:22.986: INFO: Deleting pod dns-1661... [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:53:23.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1661" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":13,"skipped":267,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:53:23.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:53:34.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4413" for this suite. • [SLOW TEST:11.866 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":14,"skipped":270,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:53:34.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 28 03:53:35.000: INFO: Waiting up to 5m0s for pod "downward-api-1622dee5-9c33-46fd-97df-63a38e897a93" in namespace "downward-api-9722" to be "success or failure" Aug 28 03:53:35.017: INFO: Pod "downward-api-1622dee5-9c33-46fd-97df-63a38e897a93": Phase="Pending", Reason="", readiness=false. Elapsed: 16.929087ms Aug 28 03:53:37.091: INFO: Pod "downward-api-1622dee5-9c33-46fd-97df-63a38e897a93": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.091473768s Aug 28 03:53:39.112: INFO: Pod "downward-api-1622dee5-9c33-46fd-97df-63a38e897a93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112424028s STEP: Saw pod success Aug 28 03:53:39.113: INFO: Pod "downward-api-1622dee5-9c33-46fd-97df-63a38e897a93" satisfied condition "success or failure" Aug 28 03:53:39.117: INFO: Trying to get logs from node jerma-worker2 pod downward-api-1622dee5-9c33-46fd-97df-63a38e897a93 container dapi-container: STEP: delete the pod Aug 28 03:53:39.630: INFO: Waiting for pod downward-api-1622dee5-9c33-46fd-97df-63a38e897a93 to disappear Aug 28 03:53:39.767: INFO: Pod downward-api-1622dee5-9c33-46fd-97df-63a38e897a93 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:53:39.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9722" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":290,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:53:39.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 28 03:53:44.164: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 28 03:53:46.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734183624, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734183624, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734183624, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734183624, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 28 03:53:49.612: 
INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:53:59.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5152" for this suite. STEP: Destroying namespace "webhook-5152-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.142 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":16,"skipped":290,"failed":0} SSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:54:00.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-2139 STEP: creating replication controller nodeport-test in namespace services-2139 I0828 03:54:01.831105 8 
runners.go:189] Created replication controller with name: nodeport-test, namespace: services-2139, replica count: 2 I0828 03:54:04.884491 8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0828 03:54:07.887288 8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 28 03:54:07.888: INFO: Creating new exec pod Aug 28 03:54:19.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2139 execpodcnjzt -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Aug 28 03:54:45.033: INFO: stderr: "I0828 03:54:44.923953 36 log.go:172] (0x4000c891e0) (0x4000a78000) Create stream\nI0828 03:54:44.926642 36 log.go:172] (0x4000c891e0) (0x4000a78000) Stream added, broadcasting: 1\nI0828 03:54:44.938196 36 log.go:172] (0x4000c891e0) Reply frame received for 1\nI0828 03:54:44.938725 36 log.go:172] (0x4000c891e0) (0x4000c82280) Create stream\nI0828 03:54:44.938822 36 log.go:172] (0x4000c891e0) (0x4000c82280) Stream added, broadcasting: 3\nI0828 03:54:44.940263 36 log.go:172] (0x4000c891e0) Reply frame received for 3\nI0828 03:54:44.940912 36 log.go:172] (0x4000c891e0) (0x4000a90000) Create stream\nI0828 03:54:44.941051 36 log.go:172] (0x4000c891e0) (0x4000a90000) Stream added, broadcasting: 5\nI0828 03:54:44.942667 36 log.go:172] (0x4000c891e0) Reply frame received for 5\nI0828 03:54:45.009537 36 log.go:172] (0x4000c891e0) Data frame received for 5\nI0828 03:54:45.009820 36 log.go:172] (0x4000c891e0) Data frame received for 3\nI0828 03:54:45.009909 36 log.go:172] (0x4000a90000) (5) Data frame handling\nI0828 03:54:45.010236 36 log.go:172] (0x4000c82280) (3) Data frame handling\nI0828 03:54:45.010487 36 log.go:172] (0x4000c891e0) Data frame received for 1\nI0828 03:54:45.010565 36 log.go:172] (0x4000a78000) (1) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nI0828 03:54:45.011957 36 log.go:172] (0x4000a78000) (1) Data frame sent\nI0828 03:54:45.012171 36 log.go:172] (0x4000a90000) (5) Data frame sent\nI0828 03:54:45.012249 36 log.go:172] (0x4000c891e0) Data frame received for 5\nI0828 03:54:45.012316 36 log.go:172] (0x4000a90000) (5) Data frame handling\nI0828 03:54:45.012396 36 log.go:172] (0x4000a90000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0828 03:54:45.012466 36 log.go:172] (0x4000c891e0) Data frame received for 5\nI0828 03:54:45.012530 36 log.go:172] (0x4000a90000) (5) Data frame handling\nI0828 03:54:45.014022 36 log.go:172] (0x4000c891e0) (0x4000a78000) Stream removed, broadcasting: 1\nI0828 03:54:45.015268 36 log.go:172] (0x4000c891e0) Go away received\nI0828 03:54:45.018630 36 log.go:172] (0x4000c891e0) (0x4000a78000) Stream removed, broadcasting: 1\nI0828 03:54:45.019000 36 log.go:172] (0x4000c891e0) (0x4000c82280) Stream removed, broadcasting: 3\nI0828 03:54:45.019283 36 log.go:172] (0x4000c891e0) (0x4000a90000) Stream removed, broadcasting: 5\n" Aug 28 03:54:45.034: INFO: stdout: "" Aug 28 03:54:45.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2139 execpodcnjzt -- /bin/sh -x -c nc -zv -t -w 2 10.109.121.157 80' Aug 28 03:54:46.658: INFO: stderr: "I0828 03:54:46.527022 67 log.go:172] (0x400094e0b0) (0x40004d15e0) Create stream\nI0828 03:54:46.530102 67 log.go:172] (0x400094e0b0) (0x40004d15e0) Stream added, broadcasting: 1\nI0828 03:54:46.547011 67 log.go:172] 
(0x400094e0b0) Reply frame received for 1\nI0828 03:54:46.547648 67 log.go:172] (0x400094e0b0) (0x40008e4000) Create stream\nI0828 03:54:46.547705 67 log.go:172] (0x400094e0b0) (0x40008e4000) Stream added, broadcasting: 3\nI0828 03:54:46.562892 67 log.go:172] (0x400094e0b0) Reply frame received for 3\nI0828 03:54:46.563356 67 log.go:172] (0x400094e0b0) (0x40008e40a0) Create stream\nI0828 03:54:46.563457 67 log.go:172] (0x400094e0b0) (0x40008e40a0) Stream added, broadcasting: 5\nI0828 03:54:46.567627 67 log.go:172] (0x400094e0b0) Reply frame received for 5\nI0828 03:54:46.636233 67 log.go:172] (0x400094e0b0) Data frame received for 3\nI0828 03:54:46.636442 67 log.go:172] (0x400094e0b0) Data frame received for 5\nI0828 03:54:46.636541 67 log.go:172] (0x40008e4000) (3) Data frame handling\nI0828 03:54:46.636820 67 log.go:172] (0x400094e0b0) Data frame received for 1\nI0828 03:54:46.636958 67 log.go:172] (0x40004d15e0) (1) Data frame handling\nI0828 03:54:46.637082 67 log.go:172] (0x40008e40a0) (5) Data frame handling\nI0828 03:54:46.638513 67 log.go:172] (0x40004d15e0) (1) Data frame sent\nI0828 03:54:46.639168 67 log.go:172] (0x400094e0b0) (0x40004d15e0) Stream removed, broadcasting: 1\nI0828 03:54:46.639324 67 log.go:172] (0x40008e40a0) (5) Data frame sent\nI0828 03:54:46.639443 67 log.go:172] (0x400094e0b0) Data frame received for 5\n+ nc -zv -t -w 2 10.109.121.157 80\nConnection to 10.109.121.157 80 port [tcp/http] succeeded!\nI0828 03:54:46.641882 67 log.go:172] (0x40008e40a0) (5) Data frame handling\nI0828 03:54:46.642772 67 log.go:172] (0x400094e0b0) Go away received\nI0828 03:54:46.646441 67 log.go:172] (0x400094e0b0) (0x40004d15e0) Stream removed, broadcasting: 1\nI0828 03:54:46.646702 67 log.go:172] (0x400094e0b0) (0x40008e4000) Stream removed, broadcasting: 3\nI0828 03:54:46.646882 67 log.go:172] (0x400094e0b0) (0x40008e40a0) Stream removed, broadcasting: 5\n" Aug 28 03:54:46.659: INFO: stdout: "" Aug 28 03:54:46.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2139 execpodcnjzt -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 32517' Aug 28 03:54:48.095: INFO: stderr: "I0828 03:54:47.997383 92 log.go:172] (0x4000a3a0b0) (0x40007c7d60) Create stream\nI0828 03:54:48.001325 92 log.go:172] (0x4000a3a0b0) (0x40007c7d60) Stream added, broadcasting: 1\nI0828 03:54:48.011370 92 log.go:172] (0x4000a3a0b0) Reply frame received for 1\nI0828 03:54:48.012059 92 log.go:172] (0x4000a3a0b0) (0x4000730000) Create stream\nI0828 03:54:48.012144 92 log.go:172] (0x4000a3a0b0) (0x4000730000) Stream added, broadcasting: 3\nI0828 03:54:48.013395 92 log.go:172] (0x4000a3a0b0) Reply frame received for 3\nI0828 03:54:48.013607 92 log.go:172] (0x4000a3a0b0) (0x40007c7e00) Create stream\nI0828 03:54:48.013657 92 log.go:172] (0x4000a3a0b0) (0x40007c7e00) Stream added, broadcasting: 5\nI0828 03:54:48.014608 92 log.go:172] (0x4000a3a0b0) Reply frame received for 5\nI0828 03:54:48.080037 92 log.go:172] (0x4000a3a0b0) Data frame received for 5\nI0828 03:54:48.080425 92 log.go:172] (0x40007c7e00) (5) Data frame handling\nI0828 03:54:48.081209 92 log.go:172] (0x4000a3a0b0) Data frame received for 3\nI0828 03:54:48.081357 92 log.go:172] (0x4000730000) (3) Data frame handling\nI0828 03:54:48.081893 92 log.go:172] (0x40007c7e00) (5) Data frame sent\nI0828 03:54:48.082443 92 log.go:172] (0x4000a3a0b0) Data frame received for 5\nI0828 03:54:48.082493 92 log.go:172] (0x40007c7e00) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 32517\nConnection to 172.18.0.6 32517 port 
[tcp/32517] succeeded!\nI0828 03:54:48.082823 92 log.go:172] (0x4000a3a0b0) Data frame received for 1\nI0828 03:54:48.082891 92 log.go:172] (0x40007c7d60) (1) Data frame handling\nI0828 03:54:48.082956 92 log.go:172] (0x40007c7d60) (1) Data frame sent\nI0828 03:54:48.084256 92 log.go:172] (0x4000a3a0b0) (0x40007c7d60) Stream removed, broadcasting: 1\nI0828 03:54:48.085800 92 log.go:172] (0x4000a3a0b0) Go away received\nI0828 03:54:48.087232 92 log.go:172] (0x4000a3a0b0) (0x40007c7d60) Stream removed, broadcasting: 1\nI0828 03:54:48.087633 92 log.go:172] (0x4000a3a0b0) (0x4000730000) Stream removed, broadcasting: 3\nI0828 03:54:48.087792 92 log.go:172] (0x4000a3a0b0) (0x40007c7e00) Stream removed, broadcasting: 5\n" Aug 28 03:54:48.096: INFO: stdout: "" Aug 28 03:54:48.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2139 execpodcnjzt -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 32517' Aug 28 03:54:49.515: INFO: stderr: "I0828 03:54:49.397642 116 log.go:172] (0x400010b600) (0x40007cbea0) Create stream\nI0828 03:54:49.404224 116 log.go:172] (0x400010b600) (0x40007cbea0) Stream added, broadcasting: 1\nI0828 03:54:49.418168 116 log.go:172] (0x400010b600) Reply frame received for 1\nI0828 03:54:49.419219 116 log.go:172] (0x400010b600) (0x40007cbf40) Create stream\nI0828 03:54:49.419320 116 log.go:172] (0x400010b600) (0x40007cbf40) Stream added, broadcasting: 3\nI0828 03:54:49.421453 116 log.go:172] (0x400010b600) Reply frame received for 3\nI0828 03:54:49.421869 116 log.go:172] (0x400010b600) (0x40005a5540) Create stream\nI0828 03:54:49.421981 116 log.go:172] (0x400010b600) (0x40005a5540) Stream added, broadcasting: 5\nI0828 03:54:49.423455 116 log.go:172] (0x400010b600) Reply frame received for 5\nI0828 03:54:49.494330 116 log.go:172] (0x400010b600) Data frame received for 5\nI0828 03:54:49.494668 116 log.go:172] (0x400010b600) Data frame received for 3\nI0828 03:54:49.494833 116 log.go:172] (0x40007cbf40) (3) Data frame handling\nI0828 03:54:49.495038 116 log.go:172] (0x40005a5540) (5) Data frame handling\nI0828 03:54:49.495601 116 log.go:172] (0x400010b600) Data frame received for 1\nI0828 03:54:49.495707 116 log.go:172] (0x40007cbea0) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.3 32517\nI0828 03:54:49.497602 116 log.go:172] (0x40007cbea0) (1) Data frame sent\nI0828 03:54:49.497965 116 log.go:172] (0x40005a5540) (5) Data frame sent\nI0828 03:54:49.498102 116 log.go:172] (0x400010b600) Data frame received for 5\nI0828 03:54:49.498176 116 log.go:172] (0x40005a5540) (5) Data frame handling\nI0828 03:54:49.498295 116 log.go:172] (0x40005a5540) (5) Data frame sent\nConnection to 172.18.0.3 32517 port [tcp/32517] succeeded!\nI0828 03:54:49.498396 116 log.go:172] (0x400010b600) Data frame received for 5\nI0828 03:54:49.498462 116 log.go:172] (0x40005a5540) (5) Data frame handling\nI0828 03:54:49.499370 116 log.go:172] (0x400010b600) (0x40007cbea0) Stream removed, broadcasting: 1\nI0828 03:54:49.501394 116 log.go:172] (0x400010b600) Go away received\nI0828 03:54:49.503803 116 log.go:172] (0x400010b600) (0x40007cbea0) Stream removed, broadcasting: 1\nI0828 03:54:49.504050 116 log.go:172] (0x400010b600) (0x40007cbf40) Stream removed, broadcasting: 3\nI0828 03:54:49.504245 116 log.go:172] (0x400010b600) (0x40005a5540) Stream removed, broadcasting: 5\n" Aug 28 03:54:49.516: INFO: stdout: "" [AfterEach] [sig-network] Services 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:54:49.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2139" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:48.602 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":17,"skipped":293,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:54:49.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 28 03:54:59.693: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:55:00.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7496" for this suite. 
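------------------------------
The point of the spec above is the interaction of the two termination-message fields: with TerminationMessagePolicy FallbackToLogsOnError, container logs are consulted only when the container fails, so a container that exits 0 without writing /dev/termination-log ends up with an empty termination message, which is what the "Expected: &{} to match" assertion checks. A sketch of such a container (image and command are illustrative):

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    c := v1.Container{
        Name:    "termination-message-container",
        Image:   "busybox",
        Command: []string{"/bin/sh", "-c", "exit 0"}, // succeeds, writes nothing
        // Logs are only used as a fallback on *error*; on success the
        // unwritten termination-log file yields an empty message.
        TerminationMessagePath:   "/dev/termination-log",
        TerminationMessagePolicy: v1.TerminationMessageFallbackToLogsOnError,
    }
    fmt.Println("policy:", c.TerminationMessagePolicy)
}
------------------------------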
• [SLOW TEST:11.580 seconds] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":299,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:55:01.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Aug 28 03:55:02.934: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 28 03:55:03.440: INFO: Waiting for terminating namespaces to be deleted... 
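------------------------------
The per-node pod inventory below ("Logging pods the kubelet thinks is on node ...") is the scheduling suite recording cluster state before the test runs. Listing the pods bound to one node is a single field-selector query; a sketch using current client-go signatures (the v1.17-era client omits the context argument), reusing the kubeconfig path and node name from this run:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // A field selector on spec.nodeName asks the API server for the
    // pods bound to one node, across all namespaces.
    pods, err := cs.CoreV1().Pods("").List(context.TODO(),
        metav1.ListOptions{FieldSelector: "spec.nodeName=jerma-worker"})
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        fmt.Printf("%s/%s\n", p.Namespace, p.Name)
    }
}
------------------------------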
Aug 28 03:55:03.656: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Aug 28 03:55:03.745: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 28 03:55:03.746: INFO: Container kube-proxy ready: true, restart count 0 Aug 28 03:55:03.746: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 28 03:55:03.746: INFO: Container kindnet-cni ready: true, restart count 0 Aug 28 03:55:03.746: INFO: rally-4fff77c6-yezpslr2 from c-rally-4fff77c6-644zqh6r started at 2020-08-28 03:54:19 +0000 UTC (1 container statuses recorded) Aug 28 03:55:03.746: INFO: Container rally-4fff77c6-yezpslr2 ready: true, restart count 0 Aug 28 03:55:03.746: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded) Aug 28 03:55:03.746: INFO: Container app ready: true, restart count 0 Aug 28 03:55:03.746: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Aug 28 03:55:04.038: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 28 03:55:04.038: INFO: Container kube-proxy ready: true, restart count 0 Aug 28 03:55:04.038: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container statuses recorded) Aug 28 03:55:04.038: INFO: Container httpd ready: true, restart count 0 Aug 28 03:55:04.038: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 28 03:55:04.038: INFO: Container kindnet-cni ready: true, restart count 0 Aug 28 03:55:04.038: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded) Aug 28 03:55:04.038: INFO: Container app ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-85c3329d-c3c3-46e3-9f45-c145210bb271 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-85c3329d-c3c3-46e3-9f45-c145210bb271 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-85c3329d-c3c3-46e3-9f45-c145210bb271 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:55:15.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4441" for this suite. 
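------------------------------
The matching-NodeSelector spec above works in two moves: stamp a random label (kubernetes.io/e2e-85c3329d-c3c3-46e3-9f45-c145210bb271, value 42) onto a node that proved schedulable, then relaunch the pod with a nodeSelector requiring exactly that label, so the scheduler can only place it on jerma-worker2. The decisive part of the relaunched pod, sketched (container details are illustrative):

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:  "with-labels",
                Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
            }},
            // Only a node carrying this exact label/value is eligible;
            // the spec stamped it onto jerma-worker2 first.
            NodeSelector: map[string]string{
                "kubernetes.io/e2e-85c3329d-c3c3-46e3-9f45-c145210bb271": "42",
            },
        },
    }
    fmt.Printf("nodeSelector: %v\n", pod.Spec.NodeSelector)
}
------------------------------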
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:14.751 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":19,"skipped":301,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:55:15.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-wjsp STEP: Creating a pod to test atomic-volume-subpath Aug 28 03:55:15.978: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wjsp" in namespace "subpath-8173" to be "success or failure" Aug 28 03:55:16.044: INFO: Pod "pod-subpath-test-projected-wjsp": Phase="Pending", Reason="", readiness=false. Elapsed: 66.069515ms Aug 28 03:55:18.141: INFO: Pod "pod-subpath-test-projected-wjsp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163559104s Aug 28 03:55:20.149: INFO: Pod "pod-subpath-test-projected-wjsp": Phase="Running", Reason="", readiness=true. Elapsed: 4.171171775s Aug 28 03:55:22.206: INFO: Pod "pod-subpath-test-projected-wjsp": Phase="Running", Reason="", readiness=true. Elapsed: 6.228211862s Aug 28 03:55:24.277: INFO: Pod "pod-subpath-test-projected-wjsp": Phase="Running", Reason="", readiness=true. Elapsed: 8.299160137s Aug 28 03:55:26.464: INFO: Pod "pod-subpath-test-projected-wjsp": Phase="Running", Reason="", readiness=true. Elapsed: 10.486532122s Aug 28 03:55:28.470: INFO: Pod "pod-subpath-test-projected-wjsp": Phase="Running", Reason="", readiness=true. Elapsed: 12.492141246s Aug 28 03:55:30.477: INFO: Pod "pod-subpath-test-projected-wjsp": Phase="Running", Reason="", readiness=true. Elapsed: 14.499116845s Aug 28 03:55:32.485: INFO: Pod "pod-subpath-test-projected-wjsp": Phase="Running", Reason="", readiness=true. Elapsed: 16.506999377s Aug 28 03:55:34.491: INFO: Pod "pod-subpath-test-projected-wjsp": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.513422413s Aug 28 03:55:36.521: INFO: Pod "pod-subpath-test-projected-wjsp": Phase="Running", Reason="", readiness=true. Elapsed: 20.542819441s Aug 28 03:55:38.530: INFO: Pod "pod-subpath-test-projected-wjsp": Phase="Running", Reason="", readiness=true. Elapsed: 22.552115s Aug 28 03:55:40.539: INFO: Pod "pod-subpath-test-projected-wjsp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.56066591s STEP: Saw pod success Aug 28 03:55:40.539: INFO: Pod "pod-subpath-test-projected-wjsp" satisfied condition "success or failure" Aug 28 03:55:40.543: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-wjsp container test-container-subpath-projected-wjsp: STEP: delete the pod Aug 28 03:55:40.586: INFO: Waiting for pod pod-subpath-test-projected-wjsp to disappear Aug 28 03:55:40.630: INFO: Pod pod-subpath-test-projected-wjsp no longer exists STEP: Deleting pod pod-subpath-test-projected-wjsp Aug 28 03:55:40.630: INFO: Deleting pod "pod-subpath-test-projected-wjsp" in namespace "subpath-8173" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:55:40.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8173" for this suite. • [SLOW TEST:24.781 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":20,"skipped":304,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:55:40.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:55:51.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7195" for this suite. • [SLOW TEST:11.248 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":21,"skipped":306,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:55:51.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl logs /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 STEP: creating an pod Aug 28 03:55:52.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-1166 -- logs-generator --log-lines-total 100 --run-duration 20s' Aug 28 03:55:53.312: INFO: stderr: "" Aug 28 03:55:53.312: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. 
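------------------------------
The steps that follow exercise kubectl's log-filtering flags: --tail, --limit-bytes, --timestamps and --since. Programmatically these map onto PodLogOptions fields of a GetLogs request; a sketch using current client-go signatures, setting only the pair of options the run below actually combines (LimitBytes and SinceSeconds cover the other two flags):

package main

import (
    "context"
    "fmt"

    v1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    tail := int64(1)
    opts := &v1.PodLogOptions{
        TailLines:  &tail, // kubectl logs --tail=1
        Timestamps: true,  // kubectl logs --timestamps
    }
    raw, err := cs.CoreV1().Pods("kubectl-1166").
        GetLogs("logs-generator", opts).DoRaw(context.TODO())
    if err != nil {
        panic(err)
    }
    fmt.Print(string(raw))
}
------------------------------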
Aug 28 03:55:53.313: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Aug 28 03:55:53.314: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1166" to be "running and ready, or succeeded" Aug 28 03:55:53.442: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 127.711502ms Aug 28 03:55:55.529: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215192027s Aug 28 03:55:57.537: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.222507577s Aug 28 03:55:57.537: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Aug 28 03:55:57.537: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Aug 28 03:55:57.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1166' Aug 28 03:55:58.902: INFO: stderr: "" Aug 28 03:55:58.902: INFO: stdout: "I0828 03:55:56.930454 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/vf4 209\nI0828 03:55:57.130637 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/m7z 315\nI0828 03:55:57.330758 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/6skc 485\nI0828 03:55:57.530652 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/w47r 401\nI0828 03:55:57.730636 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/bqb 398\nI0828 03:55:57.930600 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/nq7j 295\nI0828 03:55:58.130564 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/4cs 501\nI0828 03:55:58.330588 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/k8td 420\nI0828 03:55:58.530584 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/27q 200\nI0828 03:55:58.730611 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/qfmj 407\n" STEP: limiting log lines Aug 28 03:55:58.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1166 --tail=1' Aug 28 03:56:00.211: INFO: stderr: "" Aug 28 03:56:00.211: INFO: stdout: "I0828 03:56:00.130627 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/kjkb 217\n" Aug 28 03:56:00.211: INFO: got output "I0828 03:56:00.130627 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/kjkb 217\n" STEP: limiting log bytes Aug 28 03:56:00.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1166 --limit-bytes=1' Aug 28 03:56:01.671: INFO: stderr: "" Aug 28 03:56:01.671: INFO: stdout: "I" Aug 28 03:56:01.671: INFO: got output "I" STEP: exposing timestamps Aug 28 03:56:01.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1166 --tail=1 --timestamps' Aug 28 03:56:02.966: INFO: stderr: "" Aug 28 03:56:02.966: INFO: stdout: "2020-08-28T03:56:02.930744061Z I0828 03:56:02.930591 1 logs_generator.go:76] 30 GET /api/v1/namespaces/default/pods/8g5w 273\n" Aug 28 03:56:02.966: INFO: got output "2020-08-28T03:56:02.930744061Z I0828 03:56:02.930591 1 logs_generator.go:76] 30 GET /api/v1/namespaces/default/pods/8g5w 273\n" STEP: restricting to a time range Aug 28 03:56:05.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator 
--namespace=kubectl-1166 --since=1s' Aug 28 03:56:07.279: INFO: stderr: "" Aug 28 03:56:07.280: INFO: stdout: "I0828 03:56:05.930640 1 logs_generator.go:76] 45 GET /api/v1/namespaces/default/pods/vn5 373\nI0828 03:56:06.130633 1 logs_generator.go:76] 46 POST /api/v1/namespaces/ns/pods/7md 475\nI0828 03:56:06.330595 1 logs_generator.go:76] 47 POST /api/v1/namespaces/kube-system/pods/dhg 522\nI0828 03:56:06.530552 1 logs_generator.go:76] 48 GET /api/v1/namespaces/ns/pods/8x6 328\nI0828 03:56:06.730625 1 logs_generator.go:76] 49 GET /api/v1/namespaces/kube-system/pods/f6r8 286\nI0828 03:56:06.930619 1 logs_generator.go:76] 50 GET /api/v1/namespaces/default/pods/8nf 583\nI0828 03:56:07.130625 1 logs_generator.go:76] 51 GET /api/v1/namespaces/ns/pods/sfn4 528\n" Aug 28 03:56:07.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1166 --since=24h' Aug 28 03:56:08.651: INFO: stderr: "" Aug 28 03:56:08.651: INFO: stdout: "I0828 03:55:56.930454 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/vf4 209\nI0828 03:55:57.130637 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/m7z 315\nI0828 03:55:57.330758 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/6skc 485\nI0828 03:55:57.530652 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/w47r 401\nI0828 03:55:57.730636 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/bqb 398\nI0828 03:55:57.930600 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/nq7j 295\nI0828 03:55:58.130564 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/4cs 501\nI0828 03:55:58.330588 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/k8td 420\nI0828 03:55:58.530584 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/27q 200\nI0828 03:55:58.730611 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/qfmj 407\nI0828 03:55:58.930592 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/75tc 338\nI0828 03:55:59.130609 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/hnrt 466\nI0828 03:55:59.330603 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/kbln 471\nI0828 03:55:59.530604 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/5qm 420\nI0828 03:55:59.730557 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/ncfg 546\nI0828 03:55:59.930608 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/7pkx 307\nI0828 03:56:00.130627 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/kjkb 217\nI0828 03:56:00.330569 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/lqg 526\nI0828 03:56:00.530584 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/m55 521\nI0828 03:56:00.730624 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/2fdn 531\nI0828 03:56:00.930627 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/7dz2 342\nI0828 03:56:01.130622 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/pdg 579\nI0828 03:56:01.330597 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/wlv 455\nI0828 03:56:01.530687 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/6lzs 283\nI0828 03:56:01.730624 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/9cfs 552\nI0828 03:56:01.930565 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/default/pods/bhvc 331\nI0828 03:56:02.130581 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/688f 376\nI0828 03:56:02.330625 1 
logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/4qp 398\nI0828 03:56:02.530597 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/default/pods/bzj 536\nI0828 03:56:02.730583 1 logs_generator.go:76] 29 POST /api/v1/namespaces/default/pods/4sf 533\nI0828 03:56:02.930591 1 logs_generator.go:76] 30 GET /api/v1/namespaces/default/pods/8g5w 273\nI0828 03:56:03.130630 1 logs_generator.go:76] 31 GET /api/v1/namespaces/default/pods/jhvh 270\nI0828 03:56:03.330643 1 logs_generator.go:76] 32 PUT /api/v1/namespaces/default/pods/dgfd 472\nI0828 03:56:03.530618 1 logs_generator.go:76] 33 GET /api/v1/namespaces/kube-system/pods/7mz 398\nI0828 03:56:03.730641 1 logs_generator.go:76] 34 POST /api/v1/namespaces/kube-system/pods/gbfv 389\nI0828 03:56:03.930621 1 logs_generator.go:76] 35 GET /api/v1/namespaces/default/pods/55r 365\nI0828 03:56:04.130766 1 logs_generator.go:76] 36 POST /api/v1/namespaces/default/pods/k57 350\nI0828 03:56:04.330667 1 logs_generator.go:76] 37 POST /api/v1/namespaces/default/pods/9kq 201\nI0828 03:56:04.530615 1 logs_generator.go:76] 38 PUT /api/v1/namespaces/ns/pods/ztxb 362\nI0828 03:56:04.730627 1 logs_generator.go:76] 39 POST /api/v1/namespaces/default/pods/l6c5 401\nI0828 03:56:04.930616 1 logs_generator.go:76] 40 GET /api/v1/namespaces/ns/pods/qjdp 246\nI0828 03:56:05.130635 1 logs_generator.go:76] 41 POST /api/v1/namespaces/kube-system/pods/j7vn 342\nI0828 03:56:05.330653 1 logs_generator.go:76] 42 PUT /api/v1/namespaces/default/pods/55p 404\nI0828 03:56:05.530641 1 logs_generator.go:76] 43 GET /api/v1/namespaces/default/pods/hwp 527\nI0828 03:56:05.730629 1 logs_generator.go:76] 44 PUT /api/v1/namespaces/kube-system/pods/6gs 427\nI0828 03:56:05.930640 1 logs_generator.go:76] 45 GET /api/v1/namespaces/default/pods/vn5 373\nI0828 03:56:06.130633 1 logs_generator.go:76] 46 POST /api/v1/namespaces/ns/pods/7md 475\nI0828 03:56:06.330595 1 logs_generator.go:76] 47 POST /api/v1/namespaces/kube-system/pods/dhg 522\nI0828 03:56:06.530552 1 logs_generator.go:76] 48 GET /api/v1/namespaces/ns/pods/8x6 328\nI0828 03:56:06.730625 1 logs_generator.go:76] 49 GET /api/v1/namespaces/kube-system/pods/f6r8 286\nI0828 03:56:06.930619 1 logs_generator.go:76] 50 GET /api/v1/namespaces/default/pods/8nf 583\nI0828 03:56:07.130625 1 logs_generator.go:76] 51 GET /api/v1/namespaces/ns/pods/sfn4 528\nI0828 03:56:07.330592 1 logs_generator.go:76] 52 PUT /api/v1/namespaces/default/pods/qpc 467\nI0828 03:56:07.530598 1 logs_generator.go:76] 53 PUT /api/v1/namespaces/ns/pods/9lh 319\nI0828 03:56:07.730597 1 logs_generator.go:76] 54 POST /api/v1/namespaces/ns/pods/l2cf 485\nI0828 03:56:07.930578 1 logs_generator.go:76] 55 GET /api/v1/namespaces/ns/pods/csn 513\nI0828 03:56:08.130628 1 logs_generator.go:76] 56 PUT /api/v1/namespaces/default/pods/lvhp 402\nI0828 03:56:08.330640 1 logs_generator.go:76] 57 POST /api/v1/namespaces/ns/pods/s4jt 465\nI0828 03:56:08.530628 1 logs_generator.go:76] 58 GET /api/v1/namespaces/default/pods/h5hr 507\n" [AfterEach] Kubectl logs /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Aug 28 03:56:08.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1166' Aug 28 03:56:21.819: INFO: stderr: "" Aug 28 03:56:21.819: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:56:21.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1166" for this suite. • [SLOW TEST:29.931 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":22,"skipped":308,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:56:21.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 03:56:22.067: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-e1b10d75-668c-4fb0-8de0-56ea150e6376" in namespace "security-context-test-1708" to be "success or failure" Aug 28 03:56:22.139: INFO: Pod "busybox-privileged-false-e1b10d75-668c-4fb0-8de0-56ea150e6376": Phase="Pending", Reason="", readiness=false. Elapsed: 72.340117ms Aug 28 03:56:24.470: INFO: Pod "busybox-privileged-false-e1b10d75-668c-4fb0-8de0-56ea150e6376": Phase="Pending", Reason="", readiness=false. Elapsed: 2.403217275s Aug 28 03:56:26.476: INFO: Pod "busybox-privileged-false-e1b10d75-668c-4fb0-8de0-56ea150e6376": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.409202877s Aug 28 03:56:26.476: INFO: Pod "busybox-privileged-false-e1b10d75-668c-4fb0-8de0-56ea150e6376" satisfied condition "success or failure" Aug 28 03:56:26.494: INFO: Got logs for pod "busybox-privileged-false-e1b10d75-668c-4fb0-8de0-56ea150e6376": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:56:26.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1708" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":313,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:56:26.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 28 03:56:26.757: INFO: Waiting up to 5m0s for pod "pod-343a9dd2-b45a-4424-bc13-ce0f1c8a145b" in namespace "emptydir-9346" to be "success or failure" Aug 28 03:56:26.791: INFO: Pod "pod-343a9dd2-b45a-4424-bc13-ce0f1c8a145b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.958227ms Aug 28 03:56:28.838: INFO: Pod "pod-343a9dd2-b45a-4424-bc13-ce0f1c8a145b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081170851s Aug 28 03:56:30.844: INFO: Pod "pod-343a9dd2-b45a-4424-bc13-ce0f1c8a145b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08687136s Aug 28 03:56:32.850: INFO: Pod "pod-343a9dd2-b45a-4424-bc13-ce0f1c8a145b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.092524493s STEP: Saw pod success Aug 28 03:56:32.850: INFO: Pod "pod-343a9dd2-b45a-4424-bc13-ce0f1c8a145b" satisfied condition "success or failure" Aug 28 03:56:32.853: INFO: Trying to get logs from node jerma-worker pod pod-343a9dd2-b45a-4424-bc13-ce0f1c8a145b container test-container: STEP: delete the pod Aug 28 03:56:33.002: INFO: Waiting for pod pod-343a9dd2-b45a-4424-bc13-ce0f1c8a145b to disappear Aug 28 03:56:33.005: INFO: Pod pod-343a9dd2-b45a-4424-bc13-ce0f1c8a145b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:56:33.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9346" for this suite. 
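The "ip: RTNETLINK answers: Operation not permitted" output from the Security Context test above is the expected result: with privileged set to false, the container may not reconfigure network interfaces. A minimal hand-written sketch of this kind of pod (the name and the exact ip command are illustrative, not the suite's generated spec):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-unprivileged   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # adding an interface should fail with RTNETLINK ... Operation not permitted
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: false        # the property under test
EOF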
• [SLOW TEST:6.525 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:56:33.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 28 03:56:33.499: INFO: Waiting up to 5m0s for pod "pod-dd3e7c4a-aabd-4111-8e78-46f0c3f9a3f1" in namespace "emptydir-7103" to be "success or failure" Aug 28 03:56:33.526: INFO: Pod "pod-dd3e7c4a-aabd-4111-8e78-46f0c3f9a3f1": Phase="Pending", Reason="", readiness=false. Elapsed: 26.857013ms Aug 28 03:56:35.589: INFO: Pod "pod-dd3e7c4a-aabd-4111-8e78-46f0c3f9a3f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089500643s Aug 28 03:56:37.596: INFO: Pod "pod-dd3e7c4a-aabd-4111-8e78-46f0c3f9a3f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096971315s Aug 28 03:56:39.656: INFO: Pod "pod-dd3e7c4a-aabd-4111-8e78-46f0c3f9a3f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.15622837s STEP: Saw pod success Aug 28 03:56:39.656: INFO: Pod "pod-dd3e7c4a-aabd-4111-8e78-46f0c3f9a3f1" satisfied condition "success or failure" Aug 28 03:56:39.874: INFO: Trying to get logs from node jerma-worker pod pod-dd3e7c4a-aabd-4111-8e78-46f0c3f9a3f1 container test-container: STEP: delete the pod Aug 28 03:56:40.077: INFO: Waiting for pod pod-dd3e7c4a-aabd-4111-8e78-46f0c3f9a3f1 to disappear Aug 28 03:56:40.088: INFO: Pod pod-dd3e7c4a-aabd-4111-8e78-46f0c3f9a3f1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:56:40.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7103" for this suite. 
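Both EmptyDir cases above follow the same pattern: mount an emptyDir volume on the default medium (node-backed storage), create a file through it, and verify the resulting mode and ownership. A minimal sketch of the shape of such a pod, assuming a busybox image rather than the suite's own test image and flags:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo     # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # present in the (non-root,...) variants; dropped for (root,...)
  containers:
  - name: test-container
    image: busybox             # assumption; the suite uses its own image
    command: ["sh", "-c", "touch /test-volume/f && stat -c '%a %u' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # "default" medium, i.e. backed by node storage
EOF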
• [SLOW TEST:7.049 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":352,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:56:40.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-e78e2960-ec37-44b4-bcc6-0f3b2d5e2b5b STEP: Creating a pod to test consume configMaps Aug 28 03:56:40.242: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6b894c88-0a0d-42b4-bbf1-d215246c5647" in namespace "projected-8637" to be "success or failure" Aug 28 03:56:40.287: INFO: Pod "pod-projected-configmaps-6b894c88-0a0d-42b4-bbf1-d215246c5647": Phase="Pending", Reason="", readiness=false. Elapsed: 44.561442ms Aug 28 03:56:42.294: INFO: Pod "pod-projected-configmaps-6b894c88-0a0d-42b4-bbf1-d215246c5647": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051507992s Aug 28 03:56:44.301: INFO: Pod "pod-projected-configmaps-6b894c88-0a0d-42b4-bbf1-d215246c5647": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058371069s Aug 28 03:56:46.308: INFO: Pod "pod-projected-configmaps-6b894c88-0a0d-42b4-bbf1-d215246c5647": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06571497s STEP: Saw pod success Aug 28 03:56:46.308: INFO: Pod "pod-projected-configmaps-6b894c88-0a0d-42b4-bbf1-d215246c5647" satisfied condition "success or failure" Aug 28 03:56:46.313: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-6b894c88-0a0d-42b4-bbf1-d215246c5647 container projected-configmap-volume-test: STEP: delete the pod Aug 28 03:56:46.381: INFO: Waiting for pod pod-projected-configmaps-6b894c88-0a0d-42b4-bbf1-d215246c5647 to disappear Aug 28 03:56:46.438: INFO: Pod pod-projected-configmaps-6b894c88-0a0d-42b4-bbf1-d215246c5647 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:56:46.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8637" for this suite. 
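In the projected ConfigMap test above, "with mappings" refers to remapping a ConfigMap key to a different file path inside the volume, and "as non-root" to reading it under a non-root UID. A hand-written sketch under those assumptions (all names and the image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-demo-cm
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # run as non-root
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/projected/renamed-key"]
    volumeMounts:
    - name: cm
      mountPath: /projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-demo-cm
          items:                     # the mapping: key data-1 appears as file renamed-key
          - key: data-1
            path: renamed-key
EOF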
• [SLOW TEST:6.353 seconds] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":352,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:56:46.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Aug 28 03:56:46.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9458' Aug 28 03:56:48.236: INFO: stderr: "" Aug 28 03:56:48.236: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Aug 28 03:56:49.244: INFO: Selector matched 1 pods for map[app:agnhost] Aug 28 03:56:49.244: INFO: Found 0 / 1 Aug 28 03:56:50.245: INFO: Selector matched 1 pods for map[app:agnhost] Aug 28 03:56:50.245: INFO: Found 0 / 1 Aug 28 03:56:51.267: INFO: Selector matched 1 pods for map[app:agnhost] Aug 28 03:56:51.267: INFO: Found 1 / 1 Aug 28 03:56:51.268: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Aug 28 03:56:51.293: INFO: Selector matched 1 pods for map[app:agnhost] Aug 28 03:56:51.293: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 28 03:56:51.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-dkx64 --namespace=kubectl-9458 -p {"metadata":{"annotations":{"x":"y"}}}' Aug 28 03:56:52.572: INFO: stderr: "" Aug 28 03:56:52.572: INFO: stdout: "pod/agnhost-master-dkx64 patched\n" STEP: checking annotations Aug 28 03:56:52.578: INFO: Selector matched 1 pods for map[app:agnhost] Aug 28 03:56:52.578: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:56:52.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9458" for this suite. 
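The patch above is a strategic-merge patch that adds a single annotation. Reproduced by hand against the generated RC pod from this run (--kubeconfig dropped; the jsonpath check is an added illustration, not part of the suite):

kubectl patch pod agnhost-master-dkx64 --namespace=kubectl-9458 -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod agnhost-master-dkx64 --namespace=kubectl-9458 -o jsonpath='{.metadata.annotations.x}'   # prints: y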
• [SLOW TEST:6.138 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1433 should add annotations for pods in rc [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":27,"skipped":367,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:56:52.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl label /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276 STEP: creating the pod Aug 28 03:56:52.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2313' Aug 28 03:56:54.375: INFO: stderr: "" Aug 28 03:56:54.375: INFO: stdout: "pod/pause created\n" Aug 28 03:56:54.376: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Aug 28 03:56:54.376: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2313" to be "running and ready" Aug 28 03:56:54.409: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 33.232585ms Aug 28 03:56:56.634: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257748292s Aug 28 03:56:58.775: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.399479286s Aug 28 03:56:58.776: INFO: Pod "pause" satisfied condition "running and ready" Aug 28 03:56:58.776: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Aug 28 03:56:58.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2313' Aug 28 03:57:00.061: INFO: stderr: "" Aug 28 03:57:00.061: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Aug 28 03:57:00.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2313' Aug 28 03:57:01.330: INFO: stderr: "" Aug 28 03:57:01.331: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod Aug 28 03:57:01.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2313' Aug 28 03:57:02.620: INFO: stderr: "" Aug 28 03:57:02.620: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Aug 28 03:57:02.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2313' Aug 28 03:57:03.875: INFO: stderr: "" Aug 28 03:57:03.875: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] Kubectl label /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283 STEP: using delete to clean up resources Aug 28 03:57:03.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2313' Aug 28 03:57:05.157: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 28 03:57:05.157: INFO: stdout: "pod \"pause\" force deleted\n" Aug 28 03:57:05.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2313' Aug 28 03:57:06.466: INFO: stderr: "No resources found in kubectl-2313 namespace.\n" Aug 28 03:57:06.466: INFO: stdout: "" Aug 28 03:57:06.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2313 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 28 03:57:07.741: INFO: stderr: "" Aug 28 03:57:07.741: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:57:07.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2313" for this suite. 
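The label steps above use kubectl's add/remove syntax: key=value adds or updates a label, a trailing hyphen removes it, and -L displays the label as an extra column. Condensed from the commands in this run (--kubeconfig dropped):

kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-2313   # add
kubectl get pod pause -L testing-label --namespace=kubectl-2313                       # show TESTING-LABEL column
kubectl label pods pause testing-label- --namespace=kubectl-2313                      # trailing '-' removes the label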
• [SLOW TEST:15.158 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273 should update the label on a resource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":28,"skipped":421,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:57:07.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4053.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4053.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4053.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4053.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4053.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4053.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 28 03:57:16.078: INFO: DNS probes using dns-4053/dns-test-76da0aca-cb42-4fd8-a29d-a1d55cf3bf48 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:57:16.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4053" for this suite. • [SLOW TEST:8.530 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":29,"skipped":437,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:57:16.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Aug 28 03:57:23.430: INFO: Successfully updated pod "labelsupdate0d3f0b4b-dc66-47dc-9bfd-c64ab659523f" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:57:25.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6253" for this suite. 
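The projected downwardAPI test above mounts the pod's own labels as a file and verifies that the kubelet refreshes the file after the labels change. A minimal sketch of that setup (names and image are illustrative, not the suite's generated spec):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo      # illustrative name
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# After relabeling, the kubelet rewrites the mounted file shortly afterwards:
kubectl label pod labelsupdate-demo key=value2 --overwrite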
• [SLOW TEST:9.646 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":472,"failed":0} [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:57:25.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:57:43.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9116" for this suite. • [SLOW TEST:17.390 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":31,"skipped":472,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:57:43.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 03:58:05.853: INFO: Container started at 2020-08-28 03:57:46 +0000 UTC, pod became ready at 2020-08-28 03:58:05 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 03:58:05.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7377" for this suite. • [SLOW TEST:22.543 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":477,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 03:58:05.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5599 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Aug 28 03:58:06.092: INFO: Found 0 stateful pods, waiting for 3 Aug 28 03:58:16.101: INFO: Found 2 stateful pods, waiting for 3 Aug 28 03:58:26.102: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 28 03:58:26.102: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 28 03:58:26.103: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Aug 28 03:58:26.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5599 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 28 03:58:27.645: INFO: stderr: "I0828 03:58:27.472230 546 log.go:172] (0x4000a7e0b0) (0x4000934000) Create stream\nI0828 03:58:27.477573 546 log.go:172] (0x4000a7e0b0) (0x4000934000) Stream added, broadcasting: 1\nI0828 03:58:27.487280 546 log.go:172] (0x4000a7e0b0) Reply frame received for 1\nI0828 03:58:27.487946 546 log.go:172] (0x4000a7e0b0) (0x40009ec000) Create stream\nI0828 03:58:27.488005 546 log.go:172] (0x4000a7e0b0) (0x40009ec000) Stream added, broadcasting: 3\nI0828 03:58:27.489512 546 log.go:172] (0x4000a7e0b0) Reply frame received for 3\nI0828 03:58:27.489773 546 log.go:172] (0x4000a7e0b0) (0x40007c79a0) Create stream\nI0828 03:58:27.489830 546 log.go:172] (0x4000a7e0b0) (0x40007c79a0) Stream added, broadcasting: 5\nI0828 03:58:27.491486 546 log.go:172] (0x4000a7e0b0) Reply frame received for 5\nI0828 03:58:27.582753 546 log.go:172] (0x4000a7e0b0) Data frame received for 5\nI0828 03:58:27.583087 546 log.go:172] (0x40007c79a0) (5) Data frame handling\nI0828 03:58:27.583863 546 log.go:172] (0x40007c79a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 03:58:27.626992 546 log.go:172] (0x4000a7e0b0) Data frame received for 3\nI0828 03:58:27.627175 546 log.go:172] (0x4000a7e0b0) Data frame received for 5\nI0828 03:58:27.627294 546 log.go:172] (0x40007c79a0) (5) Data frame handling\nI0828 03:58:27.627386 546 log.go:172] (0x40009ec000) (3) Data frame handling\nI0828 03:58:27.627526 546 log.go:172] (0x40009ec000) (3) Data frame sent\nI0828 03:58:27.627621 546 log.go:172] (0x4000a7e0b0) Data frame received for 3\nI0828 03:58:27.627710 546 log.go:172] (0x40009ec000) (3) Data frame handling\nI0828 03:58:27.628861 546 log.go:172] (0x4000a7e0b0) Data frame received for 1\nI0828 03:58:27.628930 546 log.go:172] (0x4000934000) (1) Data frame handling\nI0828 03:58:27.629009 546 log.go:172] (0x4000934000) (1) Data frame sent\nI0828 03:58:27.630608 546 log.go:172] (0x4000a7e0b0) (0x4000934000) Stream removed, broadcasting: 1\nI0828 03:58:27.632820 546 log.go:172] (0x4000a7e0b0) Go away received\nI0828 03:58:27.635693 546 log.go:172] (0x4000a7e0b0) (0x4000934000) Stream removed, broadcasting: 1\nI0828 03:58:27.636302 546 log.go:172] (0x4000a7e0b0) (0x40009ec000) Stream removed, broadcasting: 3\nI0828 03:58:27.636512 546 log.go:172] (0x4000a7e0b0) (0x40007c79a0) Stream removed, broadcasting: 5\n" Aug 28 03:58:27.646: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 28 03:58:27.646: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 28 03:58:37.693: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Aug 28 03:58:47.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5599 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 03:58:49.235: INFO: stderr: "I0828 03:58:49.078028 569 log.go:172] (0x4000ad0000) (0x40009e4000) Create stream\nI0828 03:58:49.081548 569 log.go:172] (0x4000ad0000) (0x40009e4000) Stream added, broadcasting: 1\nI0828 03:58:49.094106 569 log.go:172] (0x4000ad0000) Reply frame received for 1\nI0828 03:58:49.094680 569 log.go:172] (0x4000ad0000) (0x4000811c20) Create stream\nI0828 03:58:49.094742 569 log.go:172] (0x4000ad0000) (0x4000811c20) Stream added, broadcasting: 3\nI0828 03:58:49.096243 569 log.go:172] (0x4000ad0000) Reply frame received for 3\nI0828 03:58:49.096478 569 log.go:172] (0x4000ad0000) (0x40009e40a0) Create stream\nI0828 03:58:49.096532 569 log.go:172] (0x4000ad0000) (0x40009e40a0) Stream added, broadcasting: 5\nI0828 03:58:49.097754 569 log.go:172] (0x4000ad0000) Reply frame received for 5\nI0828 03:58:49.207465 569 log.go:172] (0x4000ad0000) Data frame received for 5\nI0828 03:58:49.207858 569 log.go:172] (0x4000ad0000) Data frame received for 1\nI0828 03:58:49.208158 569 log.go:172] (0x4000ad0000) Data frame received for 3\nI0828 03:58:49.208366 569 log.go:172] (0x4000811c20) (3) Data frame handling\nI0828 03:58:49.208510 569 log.go:172] (0x40009e4000) (1) Data frame handling\nI0828 03:58:49.208867 569 log.go:172] (0x40009e40a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0828 03:58:49.210836 569 log.go:172] (0x40009e40a0) (5) Data frame sent\nI0828 03:58:49.210990 569 log.go:172] (0x4000811c20) (3) Data frame sent\nI0828 03:58:49.211288 569 log.go:172] (0x4000ad0000) Data frame received for 5\nI0828 03:58:49.211440 569 log.go:172] (0x40009e4000) (1) Data frame sent\nI0828 03:58:49.211849 569 log.go:172] (0x40009e40a0) (5) Data frame handling\nI0828 03:58:49.212081 569 log.go:172] (0x4000ad0000) Data frame received for 3\nI0828 03:58:49.212201 569 log.go:172] (0x4000811c20) (3) Data frame handling\nI0828 03:58:49.213238 569 log.go:172] (0x4000ad0000) (0x40009e4000) Stream removed, broadcasting: 1\nI0828 03:58:49.215352 569 log.go:172] (0x4000ad0000) Go away received\nI0828 03:58:49.220697 569 log.go:172] (0x4000ad0000) (0x40009e4000) Stream removed, broadcasting: 1\nI0828 03:58:49.221212 569 log.go:172] (0x4000ad0000) (0x4000811c20) Stream removed, broadcasting: 3\nI0828 03:58:49.221507 569 log.go:172] (0x4000ad0000) (0x40009e40a0) Stream removed, broadcasting: 5\n" Aug 28 03:58:49.236: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 28 03:58:49.237: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 28 03:58:59.275: INFO: Waiting for StatefulSet statefulset-5599/ss2 to complete update Aug 28 03:58:59.276: INFO: Waiting for Pod statefulset-5599/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 28 
03:58:59.276: INFO: Waiting for Pod statefulset-5599/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 28 03:58:59.276: INFO: Waiting for Pod statefulset-5599/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 28 03:59:09.291: INFO: Waiting for StatefulSet statefulset-5599/ss2 to complete update Aug 28 03:59:09.292: INFO: Waiting for Pod statefulset-5599/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 28 03:59:09.292: INFO: Waiting for Pod statefulset-5599/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 28 03:59:19.298: INFO: Waiting for StatefulSet statefulset-5599/ss2 to complete update Aug 28 03:59:19.298: INFO: Waiting for Pod statefulset-5599/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Aug 28 03:59:29.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5599 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 28 03:59:30.881: INFO: stderr: "I0828 03:59:30.690246 592 log.go:172] (0x4000ad8bb0) (0x40006ce0a0) Create stream\nI0828 03:59:30.693926 592 log.go:172] (0x4000ad8bb0) (0x40006ce0a0) Stream added, broadcasting: 1\nI0828 03:59:30.708533 592 log.go:172] (0x4000ad8bb0) Reply frame received for 1\nI0828 03:59:30.710031 592 log.go:172] (0x4000ad8bb0) (0x400082bcc0) Create stream\nI0828 03:59:30.710159 592 log.go:172] (0x4000ad8bb0) (0x400082bcc0) Stream added, broadcasting: 3\nI0828 03:59:30.712113 592 log.go:172] (0x4000ad8bb0) Reply frame received for 3\nI0828 03:59:30.712507 592 log.go:172] (0x4000ad8bb0) (0x400076e000) Create stream\nI0828 03:59:30.712621 592 log.go:172] (0x4000ad8bb0) (0x400076e000) Stream added, broadcasting: 5\nI0828 03:59:30.714025 592 log.go:172] (0x4000ad8bb0) Reply frame received for 5\nI0828 03:59:30.813660 592 log.go:172] (0x4000ad8bb0) Data frame received for 5\nI0828 03:59:30.814011 592 log.go:172] (0x400076e000) (5) Data frame handling\nI0828 03:59:30.814770 592 log.go:172] (0x400076e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 03:59:30.853989 592 log.go:172] (0x4000ad8bb0) Data frame received for 3\nI0828 03:59:30.854168 592 log.go:172] (0x400082bcc0) (3) Data frame handling\nI0828 03:59:30.854261 592 log.go:172] (0x400082bcc0) (3) Data frame sent\nI0828 03:59:30.854363 592 log.go:172] (0x4000ad8bb0) Data frame received for 3\nI0828 03:59:30.854467 592 log.go:172] (0x400082bcc0) (3) Data frame handling\nI0828 03:59:30.854718 592 log.go:172] (0x4000ad8bb0) Data frame received for 5\nI0828 03:59:30.854892 592 log.go:172] (0x400076e000) (5) Data frame handling\nI0828 03:59:30.855413 592 log.go:172] (0x4000ad8bb0) Data frame received for 1\nI0828 03:59:30.855558 592 log.go:172] (0x40006ce0a0) (1) Data frame handling\nI0828 03:59:30.855675 592 log.go:172] (0x40006ce0a0) (1) Data frame sent\nI0828 03:59:30.857752 592 log.go:172] (0x4000ad8bb0) (0x40006ce0a0) Stream removed, broadcasting: 1\nI0828 03:59:30.860302 592 log.go:172] (0x4000ad8bb0) Go away received\nI0828 03:59:30.865098 592 log.go:172] (0x4000ad8bb0) (0x40006ce0a0) Stream removed, broadcasting: 1\nI0828 03:59:30.865437 592 log.go:172] (0x4000ad8bb0) (0x400082bcc0) Stream removed, broadcasting: 3\nI0828 03:59:30.866358 592 log.go:172] (0x4000ad8bb0) (0x400076e000) Stream removed, broadcasting: 5\n" Aug 28 03:59:30.881: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 28 03:59:30.882: 
INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 28 03:59:40.948: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 28 03:59:51.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5599 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 03:59:52.762: INFO: stderr: "I0828 03:59:52.646128 615 log.go:172] (0x40001116b0) (0x4000809a40) Create stream\nI0828 03:59:52.650382 615 log.go:172] (0x40001116b0) (0x4000809a40) Stream added, broadcasting: 1\nI0828 03:59:52.664687 615 log.go:172] (0x40001116b0) Reply frame received for 1\nI0828 03:59:52.665873 615 log.go:172] (0x40001116b0) (0x4000910000) Create stream\nI0828 03:59:52.665962 615 log.go:172] (0x40001116b0) (0x4000910000) Stream added, broadcasting: 3\nI0828 03:59:52.667576 615 log.go:172] (0x40001116b0) Reply frame received for 3\nI0828 03:59:52.667985 615 log.go:172] (0x40001116b0) (0x4000809c20) Create stream\nI0828 03:59:52.668084 615 log.go:172] (0x40001116b0) (0x4000809c20) Stream added, broadcasting: 5\nI0828 03:59:52.669581 615 log.go:172] (0x40001116b0) Reply frame received for 5\nI0828 03:59:52.740675 615 log.go:172] (0x40001116b0) Data frame received for 5\nI0828 03:59:52.741058 615 log.go:172] (0x4000809c20) (5) Data frame handling\nI0828 03:59:52.742204 615 log.go:172] (0x40001116b0) Data frame received for 3\nI0828 03:59:52.742289 615 log.go:172] (0x4000910000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0828 03:59:52.743115 615 log.go:172] (0x4000809c20) (5) Data frame sent\nI0828 03:59:52.743199 615 log.go:172] (0x4000910000) (3) Data frame sent\nI0828 03:59:52.743324 615 log.go:172] (0x40001116b0) Data frame received for 5\nI0828 03:59:52.743399 615 log.go:172] (0x4000809c20) (5) Data frame handling\nI0828 03:59:52.743501 615 log.go:172] (0x40001116b0) Data frame received for 1\nI0828 03:59:52.743599 615 log.go:172] (0x4000809a40) (1) Data frame handling\nI0828 03:59:52.743717 615 log.go:172] (0x40001116b0) Data frame received for 3\nI0828 03:59:52.743797 615 log.go:172] (0x4000910000) (3) Data frame handling\nI0828 03:59:52.743869 615 log.go:172] (0x4000809a40) (1) Data frame sent\nI0828 03:59:52.745721 615 log.go:172] (0x40001116b0) (0x4000809a40) Stream removed, broadcasting: 1\nI0828 03:59:52.747562 615 log.go:172] (0x40001116b0) Go away received\nI0828 03:59:52.751380 615 log.go:172] (0x40001116b0) (0x4000809a40) Stream removed, broadcasting: 1\nI0828 03:59:52.752003 615 log.go:172] (0x40001116b0) (0x4000910000) Stream removed, broadcasting: 3\nI0828 03:59:52.752230 615 log.go:172] (0x40001116b0) (0x4000809c20) Stream removed, broadcasting: 5\n" Aug 28 03:59:52.763: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 28 03:59:52.763: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 28 04:00:22.878: INFO: Waiting for StatefulSet statefulset-5599/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Aug 28 04:00:32.896: INFO: Deleting all statefulset in ns statefulset-5599 Aug 28 04:00:32.901: INFO: Scaling statefulset ss2 to 0 Aug 28 04:01:02.938: INFO: 
Waiting for statefulset status.replicas updated to 0 Aug 28 04:01:02.942: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:01:02.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5599" for this suite. • [SLOW TEST:177.141 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":33,"skipped":484,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:01:03.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-045f9dd1-080e-4726-a2e4-08232e9693a8 in namespace container-probe-3344 Aug 28 04:01:07.165: INFO: Started pod liveness-045f9dd1-080e-4726-a2e4-08232e9693a8 in namespace container-probe-3344 STEP: checking the pod's current state and verifying that restartCount is present Aug 28 04:01:07.169: INFO: Initial restart count of pod liveness-045f9dd1-080e-4726-a2e4-08232e9693a8 is 0 Aug 28 04:01:25.283: INFO: Restart count of pod container-probe-3344/liveness-045f9dd1-080e-4726-a2e4-08232e9693a8 is now 1 (18.114754777s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:01:25.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3344" for this suite. 
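The liveness test above expects exactly one restart: the probed endpoint serves /healthz successfully for a while and then starts failing, so the kubelet kills and restarts the container. A hand-written sketch of such a pod; the agnhost liveness subcommand and the port are assumptions based on the image used elsewhere in this run, not the suite's exact spec:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo     # illustrative name
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # image seen earlier in this run
    args: ["liveness"]         # assumption: serves /healthz OK, then fails after a delay
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080             # assumed port
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1
EOF
kubectl get pod liveness-http-demo -w   # watch the RESTARTS column climb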
• [SLOW TEST:22.304 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":498,"failed":0} SSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:01:25.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:01:25.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-7002" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":35,"skipped":505,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:01:25.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-7c0f6b58-d0ae-40f4-bdbb-bf5ffb5e2581 STEP: Creating a pod to test consume configMaps Aug 28 04:01:27.060: INFO: Waiting up to 5m0s for pod "pod-configmaps-54f4864d-dc51-4586-8f9d-c3382cb4d818" in namespace "configmap-6473" to be "success or failure" Aug 28 04:01:27.179: INFO: Pod "pod-configmaps-54f4864d-dc51-4586-8f9d-c3382cb4d818": Phase="Pending", Reason="", readiness=false. 
Elapsed: 118.607425ms Aug 28 04:01:29.329: INFO: Pod "pod-configmaps-54f4864d-dc51-4586-8f9d-c3382cb4d818": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268616299s Aug 28 04:01:31.336: INFO: Pod "pod-configmaps-54f4864d-dc51-4586-8f9d-c3382cb4d818": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.275996383s STEP: Saw pod success Aug 28 04:01:31.336: INFO: Pod "pod-configmaps-54f4864d-dc51-4586-8f9d-c3382cb4d818" satisfied condition "success or failure" Aug 28 04:01:31.364: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-54f4864d-dc51-4586-8f9d-c3382cb4d818 container configmap-volume-test: STEP: delete the pod Aug 28 04:01:31.417: INFO: Waiting for pod pod-configmaps-54f4864d-dc51-4586-8f9d-c3382cb4d818 to disappear Aug 28 04:01:31.422: INFO: Pod pod-configmaps-54f4864d-dc51-4586-8f9d-c3382cb4d818 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:01:31.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6473" for this suite. • [SLOW TEST:5.572 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":545,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:01:31.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-4d8e7a4f-c7e4-4187-a650-00bead433883 STEP: Creating a pod to test consume configMaps Aug 28 04:01:31.749: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7729d038-1b42-4ea6-bb58-aebe8a796719" in namespace "projected-9519" to be "success or failure" Aug 28 04:01:31.819: INFO: Pod "pod-projected-configmaps-7729d038-1b42-4ea6-bb58-aebe8a796719": Phase="Pending", Reason="", readiness=false. Elapsed: 69.877485ms Aug 28 04:01:33.933: INFO: Pod "pod-projected-configmaps-7729d038-1b42-4ea6-bb58-aebe8a796719": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.184069056s Aug 28 04:01:35.939: INFO: Pod "pod-projected-configmaps-7729d038-1b42-4ea6-bb58-aebe8a796719": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.189730404s STEP: Saw pod success Aug 28 04:01:35.939: INFO: Pod "pod-projected-configmaps-7729d038-1b42-4ea6-bb58-aebe8a796719" satisfied condition "success or failure" Aug 28 04:01:35.943: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-7729d038-1b42-4ea6-bb58-aebe8a796719 container projected-configmap-volume-test: STEP: delete the pod Aug 28 04:01:36.026: INFO: Waiting for pod pod-projected-configmaps-7729d038-1b42-4ea6-bb58-aebe8a796719 to disappear Aug 28 04:01:36.039: INFO: Pod pod-projected-configmaps-7729d038-1b42-4ea6-bb58-aebe8a796719 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:01:36.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9519" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":558,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:01:36.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Aug 28 04:01:41.145: INFO: Successfully updated pod "annotationupdate7b59db29-d9be-4339-b9ca-55aae1744763" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:01:45.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3083" for this suite. 
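------------------------------
The "Successfully updated pod" step above works because downward API volume files are re-rendered by the kubelet when pod metadata changes; no container restart is involved. A minimal reproduction, with illustrative names and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo               # illustrative name
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox:1.29               # illustrative image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# Update the annotation; the kubelet rewrites /etc/podinfo/annotations on its
# next volume sync, which is the change the test waits to observe:
kubectl annotate pod annotation-demo build=two --overwrite
------------------------------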
• [SLOW TEST:9.225 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":583,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:01:45.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325 [It] should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Aug 28 04:01:45.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8359' Aug 28 04:01:47.157: INFO: stderr: "" Aug 28 04:01:47.157: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 28 04:01:47.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8359' Aug 28 04:01:48.413: INFO: stderr: "" Aug 28 04:01:48.413: INFO: stdout: "update-demo-nautilus-5l9xg update-demo-nautilus-d86b7 " Aug 28 04:01:48.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5l9xg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8359' Aug 28 04:01:49.696: INFO: stderr: "" Aug 28 04:01:49.696: INFO: stdout: "" Aug 28 04:01:49.697: INFO: update-demo-nautilus-5l9xg is created but not running Aug 28 04:01:54.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8359' Aug 28 04:01:55.975: INFO: stderr: "" Aug 28 04:01:55.975: INFO: stdout: "update-demo-nautilus-5l9xg update-demo-nautilus-d86b7 " Aug 28 04:01:55.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5l9xg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8359' Aug 28 04:01:57.217: INFO: stderr: "" Aug 28 04:01:57.217: INFO: stdout: "true" Aug 28 04:01:57.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5l9xg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8359' Aug 28 04:01:58.508: INFO: stderr: "" Aug 28 04:01:58.509: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 28 04:01:58.509: INFO: validating pod update-demo-nautilus-5l9xg Aug 28 04:01:58.515: INFO: got data: { "image": "nautilus.jpg" } Aug 28 04:01:58.516: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 28 04:01:58.517: INFO: update-demo-nautilus-5l9xg is verified up and running Aug 28 04:01:58.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d86b7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8359' Aug 28 04:01:59.807: INFO: stderr: "" Aug 28 04:01:59.808: INFO: stdout: "true" Aug 28 04:01:59.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d86b7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8359' Aug 28 04:02:01.094: INFO: stderr: "" Aug 28 04:02:01.094: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 28 04:02:01.094: INFO: validating pod update-demo-nautilus-d86b7 Aug 28 04:02:01.102: INFO: got data: { "image": "nautilus.jpg" } Aug 28 04:02:01.102: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 28 04:02:01.102: INFO: update-demo-nautilus-d86b7 is verified up and running STEP: rolling-update to new replication controller Aug 28 04:02:01.113: INFO: scanned /root for discovery docs: Aug 28 04:02:01.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8359' Aug 28 04:02:25.640: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Aug 28 04:02:25.641: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 28 04:02:25.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8359' Aug 28 04:02:26.934: INFO: stderr: "" Aug 28 04:02:26.934: INFO: stdout: "update-demo-kitten-4m5sm update-demo-kitten-hcdds " Aug 28 04:02:26.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4m5sm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8359' Aug 28 04:02:28.232: INFO: stderr: "" Aug 28 04:02:28.232: INFO: stdout: "true" Aug 28 04:02:28.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4m5sm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8359' Aug 28 04:02:29.510: INFO: stderr: "" Aug 28 04:02:29.510: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Aug 28 04:02:29.510: INFO: validating pod update-demo-kitten-4m5sm Aug 28 04:02:29.516: INFO: got data: { "image": "kitten.jpg" } Aug 28 04:02:29.516: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Aug 28 04:02:29.516: INFO: update-demo-kitten-4m5sm is verified up and running Aug 28 04:02:29.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hcdds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8359' Aug 28 04:02:30.803: INFO: stderr: "" Aug 28 04:02:30.803: INFO: stdout: "true" Aug 28 04:02:30.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hcdds -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8359' Aug 28 04:02:32.060: INFO: stderr: "" Aug 28 04:02:32.060: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Aug 28 04:02:32.060: INFO: validating pod update-demo-kitten-hcdds Aug 28 04:02:32.065: INFO: got data: { "image": "kitten.jpg" } Aug 28 04:02:32.065: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Aug 28 04:02:32.065: INFO: update-demo-kitten-hcdds is verified up and running [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:02:32.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8359" for this suite. • [SLOW TEST:46.794 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323 should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":39,"skipped":596,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:02:32.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Aug 28 04:02:36.208: INFO: &Pod{ObjectMeta:{send-events-e71cfbe9-67cb-4e51-91b5-ac3d0bbaf57c events-7461 /api/v1/namespaces/events-7461/pods/send-events-e71cfbe9-67cb-4e51-91b5-ac3d0bbaf57c 0ade690b-72c1-420f-ade0-3bf494cd21e1 4478003 0 2020-08-28 04:02:32 +0000 UTC map[name:foo time:147720642] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-g9njp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-g9njp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-g9njp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:02:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:02:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:02:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.222,StartTime:2020-08-28 04:02:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 04:02:34 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://149989995dfaef8fae53fb0eb71877b36f803f8ee07952ed30b0b6568a17cbe9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.222,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Aug 28 04:02:38.308: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 28 04:02:40.317: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:02:40.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7461" for this suite. • [SLOW TEST:8.271 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":40,"skipped":641,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:02:40.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 28 04:02:40.484: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c194b54-56e6-403a-80ac-3b7f5ca51fd8" in namespace "downward-api-7886" to be "success or failure" Aug 28 04:02:40.577: INFO: Pod "downwardapi-volume-0c194b54-56e6-403a-80ac-3b7f5ca51fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 92.794398ms Aug 28 04:02:42.584: INFO: Pod "downwardapi-volume-0c194b54-56e6-403a-80ac-3b7f5ca51fd8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.100271949s Aug 28 04:02:44.591: INFO: Pod "downwardapi-volume-0c194b54-56e6-403a-80ac-3b7f5ca51fd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107525793s STEP: Saw pod success Aug 28 04:02:44.592: INFO: Pod "downwardapi-volume-0c194b54-56e6-403a-80ac-3b7f5ca51fd8" satisfied condition "success or failure" Aug 28 04:02:44.597: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0c194b54-56e6-403a-80ac-3b7f5ca51fd8 container client-container: STEP: delete the pod Aug 28 04:02:44.664: INFO: Waiting for pod downwardapi-volume-0c194b54-56e6-403a-80ac-3b7f5ca51fd8 to disappear Aug 28 04:02:44.669: INFO: Pod downwardapi-volume-0c194b54-56e6-403a-80ac-3b7f5ca51fd8 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:02:44.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7886" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":700,"failed":0} S ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:02:44.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 28 04:02:44.926: INFO: Waiting up to 5m0s for pod "downwardapi-volume-298ddb60-27c1-4f23-96bd-4694957498e5" in namespace "downward-api-592" to be "success or failure" Aug 28 04:02:44.994: INFO: Pod "downwardapi-volume-298ddb60-27c1-4f23-96bd-4694957498e5": Phase="Pending", Reason="", readiness=false. Elapsed: 67.931314ms Aug 28 04:02:47.001: INFO: Pod "downwardapi-volume-298ddb60-27c1-4f23-96bd-4694957498e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075452949s Aug 28 04:02:49.008: INFO: Pod "downwardapi-volume-298ddb60-27c1-4f23-96bd-4694957498e5": Phase="Running", Reason="", readiness=true. Elapsed: 4.081896158s Aug 28 04:02:51.015: INFO: Pod "downwardapi-volume-298ddb60-27c1-4f23-96bd-4694957498e5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.088951021s STEP: Saw pod success Aug 28 04:02:51.015: INFO: Pod "downwardapi-volume-298ddb60-27c1-4f23-96bd-4694957498e5" satisfied condition "success or failure" Aug 28 04:02:51.020: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-298ddb60-27c1-4f23-96bd-4694957498e5 container client-container: STEP: delete the pod Aug 28 04:02:51.043: INFO: Waiting for pod downwardapi-volume-298ddb60-27c1-4f23-96bd-4694957498e5 to disappear Aug 28 04:02:51.054: INFO: Pod downwardapi-volume-298ddb60-27c1-4f23-96bd-4694957498e5 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:02:51.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-592" for this suite. • [SLOW TEST:6.419 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":701,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:02:51.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0828 04:02:51.904791 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
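------------------------------
For reference, the cascading behavior this test asserts can be reproduced with plain kubectl; the deployment name and image below are illustrative, and the orphaning variant is shown commented out:

# A Deployment owns a ReplicaSet, which owns the Pods:
kubectl create deployment gc-demo --image=nginx
# Deleting with cascading (the default) lets the garbage collector remove the
# ReplicaSet and its Pods along with the Deployment, as verified above:
kubectl delete deployment gc-demo
# Orphaning instead (kubectl <=1.19 boolean flag; sets
# deleteOptions.propagationPolicy=Orphan) would leave the ReplicaSet behind:
#   kubectl delete deployment gc-demo --cascade=false
------------------------------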
Aug 28 04:02:51.905: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:02:51.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2293" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":43,"skipped":727,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:02:51.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2747.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2747.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2747.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2747.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2747.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2747.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2747.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2747.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2747.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2747.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 28 04:03:02.242: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:02.246: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:02.250: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:02.254: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:02.266: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:02.269: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:02.273: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2747.svc.cluster.local from pod 
dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:02.277: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:02.286: INFO: Lookups using dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2747.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2747.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local jessie_udp@dns-test-service-2.dns-2747.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2747.svc.cluster.local] Aug 28 04:03:07.294: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:07.299: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:07.302: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:07.307: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:07.319: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:07.322: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:07.326: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:07.330: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:07.339: INFO: Lookups using dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-2747.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2747.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local jessie_udp@dns-test-service-2.dns-2747.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2747.svc.cluster.local] Aug 28 04:03:12.293: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:12.298: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:12.301: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:12.304: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:12.313: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:12.316: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:12.319: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:12.322: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:12.329: INFO: Lookups using dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2747.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2747.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local jessie_udp@dns-test-service-2.dns-2747.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2747.svc.cluster.local] Aug 28 04:03:17.294: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:17.300: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:17.304: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:17.308: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:17.334: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:17.338: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:17.341: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:17.345: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:17.353: INFO: Lookups using dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2747.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2747.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local jessie_udp@dns-test-service-2.dns-2747.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2747.svc.cluster.local] Aug 28 04:03:22.293: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:22.298: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:22.302: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:22.306: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested 
resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:22.319: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:22.323: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:22.327: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:22.331: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:22.339: INFO: Lookups using dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2747.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2747.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local jessie_udp@dns-test-service-2.dns-2747.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2747.svc.cluster.local] Aug 28 04:03:27.293: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:27.297: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:27.301: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:27.305: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:27.326: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:27.330: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:27.334: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:27.337: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2747.svc.cluster.local from pod dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce: the server could not find the requested resource (get pods dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce) Aug 28 04:03:27.345: INFO: Lookups using dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2747.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2747.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2747.svc.cluster.local jessie_udp@dns-test-service-2.dns-2747.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2747.svc.cluster.local] Aug 28 04:03:32.337: INFO: DNS probes using dns-2747/dns-test-92adfdf4-fdea-4bed-a159-5ae51e987bce succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:03:32.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2747" for this suite. • [SLOW TEST:41.430 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":44,"skipped":762,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:03:33.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-b999d664-d717-4922-a453-2c5e085d40b1 STEP: Creating secret with name s-test-opt-upd-3c238dad-edbd-4073-b0f6-4ca1ae590186 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b999d664-d717-4922-a453-2c5e085d40b1 STEP: Updating secret s-test-opt-upd-3c238dad-edbd-4073-b0f6-4ca1ae590186 STEP: Creating secret with name 
s-test-opt-create-a638c2c8-3f02-4f3a-b106-def1e1602a48 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:03:43.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2141" for this suite. • [SLOW TEST:10.447 seconds] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":767,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:03:43.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-wphq STEP: Creating a pod to test atomic-volume-subpath Aug 28 04:03:43.924: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wphq" in namespace "subpath-1304" to be "success or failure" Aug 28 04:03:43.935: INFO: Pod "pod-subpath-test-configmap-wphq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.472417ms Aug 28 04:03:45.976: INFO: Pod "pod-subpath-test-configmap-wphq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051597872s Aug 28 04:03:47.983: INFO: Pod "pod-subpath-test-configmap-wphq": Phase="Running", Reason="", readiness=true. Elapsed: 4.058805909s Aug 28 04:03:49.995: INFO: Pod "pod-subpath-test-configmap-wphq": Phase="Running", Reason="", readiness=true. Elapsed: 6.070964248s Aug 28 04:03:52.001: INFO: Pod "pod-subpath-test-configmap-wphq": Phase="Running", Reason="", readiness=true. Elapsed: 8.076672583s Aug 28 04:03:54.006: INFO: Pod "pod-subpath-test-configmap-wphq": Phase="Running", Reason="", readiness=true. Elapsed: 10.081849325s Aug 28 04:03:56.013: INFO: Pod "pod-subpath-test-configmap-wphq": Phase="Running", Reason="", readiness=true. Elapsed: 12.088433197s Aug 28 04:03:58.028: INFO: Pod "pod-subpath-test-configmap-wphq": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.103585105s Aug 28 04:04:00.035: INFO: Pod "pod-subpath-test-configmap-wphq": Phase="Running", Reason="", readiness=true. Elapsed: 16.11047574s Aug 28 04:04:02.042: INFO: Pod "pod-subpath-test-configmap-wphq": Phase="Running", Reason="", readiness=true. Elapsed: 18.117739854s Aug 28 04:04:04.050: INFO: Pod "pod-subpath-test-configmap-wphq": Phase="Running", Reason="", readiness=true. Elapsed: 20.125311477s Aug 28 04:04:06.057: INFO: Pod "pod-subpath-test-configmap-wphq": Phase="Running", Reason="", readiness=true. Elapsed: 22.132902035s Aug 28 04:04:08.065: INFO: Pod "pod-subpath-test-configmap-wphq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.140217644s STEP: Saw pod success Aug 28 04:04:08.065: INFO: Pod "pod-subpath-test-configmap-wphq" satisfied condition "success or failure" Aug 28 04:04:08.070: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-wphq container test-container-subpath-configmap-wphq: STEP: delete the pod Aug 28 04:04:08.154: INFO: Waiting for pod pod-subpath-test-configmap-wphq to disappear Aug 28 04:04:08.164: INFO: Pod pod-subpath-test-configmap-wphq no longer exists STEP: Deleting pod pod-subpath-test-configmap-wphq Aug 28 04:04:08.164: INFO: Deleting pod "pod-subpath-test-configmap-wphq" in namespace "subpath-1304" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:04:08.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1304" for this suite. • [SLOW TEST:24.377 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":46,"skipped":769,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:04:08.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-784, will wait for the garbage collector to delete the pods Aug 28 04:04:12.337: INFO: Deleting Job.batch foo took: 9.362602ms Aug 28 04:04:12.638: 
INFO: Terminating Job.batch foo pods took: 300.689346ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:04:51.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-784" for this suite. • [SLOW TEST:43.575 seconds] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":47,"skipped":782,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:04:51.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Aug 28 04:04:51.874: INFO: Waiting up to 5m0s for pod "var-expansion-b3fe87dc-a2eb-4add-83ab-29d3fad9687b" in namespace "var-expansion-8722" to be "success or failure" Aug 28 04:04:51.879: INFO: Pod "var-expansion-b3fe87dc-a2eb-4add-83ab-29d3fad9687b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.414146ms Aug 28 04:04:53.886: INFO: Pod "var-expansion-b3fe87dc-a2eb-4add-83ab-29d3fad9687b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012095024s Aug 28 04:04:56.032: INFO: Pod "var-expansion-b3fe87dc-a2eb-4add-83ab-29d3fad9687b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.157726181s STEP: Saw pod success Aug 28 04:04:56.032: INFO: Pod "var-expansion-b3fe87dc-a2eb-4add-83ab-29d3fad9687b" satisfied condition "success or failure" Aug 28 04:04:56.046: INFO: Trying to get logs from node jerma-worker pod var-expansion-b3fe87dc-a2eb-4add-83ab-29d3fad9687b container dapi-container: STEP: delete the pod Aug 28 04:04:56.065: INFO: Waiting for pod var-expansion-b3fe87dc-a2eb-4add-83ab-29d3fad9687b to disappear Aug 28 04:04:56.069: INFO: Pod var-expansion-b3fe87dc-a2eb-4add-83ab-29d3fad9687b no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:04:56.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8722" for this suite. 
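The variable-expansion case just logged relies on Kubernetes expanding $(VAR) references in a container's command and args from the pod's own env entries before the container starts; no shell is involved. A minimal sketch of a pod exercising the same behavior (all names and values here are illustrative, not taken from the test):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo          # hypothetical name
spec:
  restartPolicy: Never              # run once; the pod ends Succeeded, matching the "success or failure" wait above
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from expansion" # illustrative value
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]            # expanded by the kubelet from the env entry above
EOF

kubectl logs var-expansion-demo should then print the substituted value, which is what the test verifies by reading the dapi-container logs.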
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":811,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:04:56.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Aug 28 04:04:56.208: INFO: Waiting up to 5m0s for pod "pod-c77cc567-a47d-4658-81c5-d0c67a87d2d3" in namespace "emptydir-4687" to be "success or failure" Aug 28 04:04:56.215: INFO: Pod "pod-c77cc567-a47d-4658-81c5-d0c67a87d2d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138265ms Aug 28 04:04:58.336: INFO: Pod "pod-c77cc567-a47d-4658-81c5-d0c67a87d2d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127602872s Aug 28 04:05:00.343: INFO: Pod "pod-c77cc567-a47d-4658-81c5-d0c67a87d2d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.134466765s STEP: Saw pod success Aug 28 04:05:00.343: INFO: Pod "pod-c77cc567-a47d-4658-81c5-d0c67a87d2d3" satisfied condition "success or failure" Aug 28 04:05:00.348: INFO: Trying to get logs from node jerma-worker pod pod-c77cc567-a47d-4658-81c5-d0c67a87d2d3 container test-container: STEP: delete the pod Aug 28 04:05:00.376: INFO: Waiting for pod pod-c77cc567-a47d-4658-81c5-d0c67a87d2d3 to disappear Aug 28 04:05:00.379: INFO: Pod pod-c77cc567-a47d-4658-81c5-d0c67a87d2d3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:05:00.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4687" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":812,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:05:00.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run default /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490 [It] should create an rc or deployment from an image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 28 04:05:00.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3920' Aug 28 04:05:04.590: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 28 04:05:04.591: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496 Aug 28 04:05:04.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3920' Aug 28 04:05:05.839: INFO: stderr: "" Aug 28 04:05:05.839: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:05:05.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3920" for this suite. 
• [SLOW TEST:5.455 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run default /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1484 should create an rc or deployment from an image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":50,"skipped":813,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:05:05.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-khv8b in namespace proxy-6904 I0828 04:05:06.767928 8 runners.go:189] Created replication controller with name: proxy-service-khv8b, namespace: proxy-6904, replica count: 1 I0828 04:05:07.821556 8 runners.go:189] proxy-service-khv8b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0828 04:05:08.822428 8 runners.go:189] proxy-service-khv8b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0828 04:05:09.823138 8 runners.go:189] proxy-service-khv8b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0828 04:05:10.824114 8 runners.go:189] proxy-service-khv8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0828 04:05:11.824917 8 runners.go:189] proxy-service-khv8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0828 04:05:12.825631 8 runners.go:189] proxy-service-khv8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0828 04:05:13.826236 8 runners.go:189] proxy-service-khv8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0828 04:05:14.826906 8 runners.go:189] proxy-service-khv8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0828 04:05:15.827732 8 runners.go:189] proxy-service-khv8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 
0 terminating, 0 unknown, 1 runningButNotReady I0828 04:05:16.828404 8 runners.go:189] proxy-service-khv8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0828 04:05:17.829151 8 runners.go:189] proxy-service-khv8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0828 04:05:18.829909 8 runners.go:189] proxy-service-khv8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0828 04:05:19.830652 8 runners.go:189] proxy-service-khv8b Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 28 04:05:19.841: INFO: setup took 13.492791313s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Aug 28 04:05:19.851: INFO: (0) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:1080/proxy/: test<... (200; 8.666031ms) Aug 28 04:05:19.854: INFO: (0) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 11.782713ms) Aug 28 04:05:19.854: INFO: (0) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 12.253046ms) Aug 28 04:05:19.855: INFO: (0) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 12.397508ms) Aug 28 04:05:19.855: INFO: (0) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname2/proxy/: bar (200; 13.033577ms) Aug 28 04:05:19.855: INFO: (0) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 12.850774ms) Aug 28 04:05:19.857: INFO: (0) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 15.25146ms) Aug 28 04:05:19.857: INFO: (0) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... (200; 14.584232ms) Aug 28 04:05:19.857: INFO: (0) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 15.215202ms) Aug 28 04:05:19.858: INFO: (0) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname1/proxy/: foo (200; 15.476191ms) Aug 28 04:05:19.858: INFO: (0) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname1/proxy/: foo (200; 12.920105ms) Aug 28 04:05:19.858: INFO: (0) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 16.354996ms) Aug 28 04:05:19.858: INFO: (0) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 15.961215ms) Aug 28 04:05:19.861: INFO: (0) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname2/proxy/: tls qux (200; 19.197555ms) Aug 28 04:05:19.861: INFO: (0) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 18.555209ms) Aug 28 04:05:19.862: INFO: (0) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: ... (200; 4.760637ms) Aug 28 04:05:19.867: INFO: (1) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 4.726669ms) Aug 28 04:05:19.867: INFO: (1) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname1/proxy/: foo (200; 5.362294ms) Aug 28 04:05:19.867: INFO: (1) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:1080/proxy/: test<... 
(200; 5.525494ms) Aug 28 04:05:19.868: INFO: (1) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 5.373882ms) Aug 28 04:05:19.868: INFO: (1) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname2/proxy/: tls qux (200; 5.92211ms) Aug 28 04:05:19.868: INFO: (1) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: test (200; 6.831322ms) Aug 28 04:05:19.869: INFO: (1) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 7.15716ms) Aug 28 04:05:19.869: INFO: (1) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname2/proxy/: bar (200; 7.013116ms) Aug 28 04:05:19.870: INFO: (1) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 7.808822ms) Aug 28 04:05:19.873: INFO: (2) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 3.217055ms) Aug 28 04:05:19.874: INFO: (2) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 4.159261ms) Aug 28 04:05:19.875: INFO: (2) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:1080/proxy/: test<... (200; 4.641497ms) Aug 28 04:05:19.875: INFO: (2) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname1/proxy/: foo (200; 4.999661ms) Aug 28 04:05:19.875: INFO: (2) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 5.077542ms) Aug 28 04:05:19.875: INFO: (2) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 5.127383ms) Aug 28 04:05:19.875: INFO: (2) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 5.260465ms) Aug 28 04:05:19.876: INFO: (2) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... 
(200; 5.457221ms) Aug 28 04:05:19.876: INFO: (2) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 5.476294ms) Aug 28 04:05:19.878: INFO: (2) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 7.458692ms) Aug 28 04:05:19.878: INFO: (2) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname1/proxy/: foo (200; 7.750458ms) Aug 28 04:05:19.878: INFO: (2) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 8.031281ms) Aug 28 04:05:19.878: INFO: (2) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname2/proxy/: tls qux (200; 7.931204ms) Aug 28 04:05:19.878: INFO: (2) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 8.176646ms) Aug 28 04:05:19.878: INFO: (2) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname2/proxy/: bar (200; 8.090425ms) Aug 28 04:05:19.878: INFO: (2) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: test (200; 3.869686ms) Aug 28 04:05:19.882: INFO: (3) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 3.988787ms) Aug 28 04:05:19.884: INFO: (3) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 5.748236ms) Aug 28 04:05:19.884: INFO: (3) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 5.865357ms) Aug 28 04:05:19.884: INFO: (3) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname1/proxy/: foo (200; 5.822277ms) Aug 28 04:05:19.884: INFO: (3) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 5.646091ms) Aug 28 04:05:19.885: INFO: (3) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:1080/proxy/: test<... (200; 6.234992ms) Aug 28 04:05:19.885: INFO: (3) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname2/proxy/: tls qux (200; 6.764232ms) Aug 28 04:05:19.885: INFO: (3) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 6.503393ms) Aug 28 04:05:19.885: INFO: (3) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: ... (200; 6.987589ms) Aug 28 04:05:19.886: INFO: (3) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 6.974383ms) Aug 28 04:05:19.890: INFO: (4) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 3.546773ms) Aug 28 04:05:19.890: INFO: (4) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... 
(200; 4.238541ms) Aug 28 04:05:19.891: INFO: (4) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 5.108235ms) Aug 28 04:05:19.892: INFO: (4) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 5.418407ms) Aug 28 04:05:19.892: INFO: (4) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 5.424184ms) Aug 28 04:05:19.892: INFO: (4) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 5.539813ms) Aug 28 04:05:19.892: INFO: (4) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 5.511301ms) Aug 28 04:05:19.892: INFO: (4) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 5.555218ms) Aug 28 04:05:19.892: INFO: (4) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname2/proxy/: bar (200; 5.935654ms) Aug 28 04:05:19.892: INFO: (4) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: test<... (200; 7.244332ms) Aug 28 04:05:19.893: INFO: (4) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname1/proxy/: foo (200; 7.171364ms) Aug 28 04:05:19.897: INFO: (5) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 3.803511ms) Aug 28 04:05:19.898: INFO: (5) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 3.652975ms) Aug 28 04:05:19.898: INFO: (5) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:1080/proxy/: test<... (200; 3.928462ms) Aug 28 04:05:19.899: INFO: (5) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname1/proxy/: foo (200; 5.779816ms) Aug 28 04:05:19.900: INFO: (5) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 6.001385ms) Aug 28 04:05:19.900: INFO: (5) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... (200; 6.159146ms) Aug 28 04:05:19.900: INFO: (5) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 6.255786ms) Aug 28 04:05:19.900: INFO: (5) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 6.249545ms) Aug 28 04:05:19.900: INFO: (5) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 6.617972ms) Aug 28 04:05:19.901: INFO: (5) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 6.632131ms) Aug 28 04:05:19.901: INFO: (5) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 6.818494ms) Aug 28 04:05:19.901: INFO: (5) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: test<... (200; 5.429124ms) Aug 28 04:05:19.907: INFO: (6) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 5.556323ms) Aug 28 04:05:19.907: INFO: (6) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 5.864086ms) Aug 28 04:05:19.907: INFO: (6) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 5.913002ms) Aug 28 04:05:19.908: INFO: (6) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname2/proxy/: tls qux (200; 6.25278ms) Aug 28 04:05:19.908: INFO: (6) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... 
(200; 6.121069ms) Aug 28 04:05:19.908: INFO: (6) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 6.4236ms) Aug 28 04:05:19.908: INFO: (6) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname1/proxy/: foo (200; 6.291438ms) Aug 28 04:05:19.914: INFO: (7) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 5.127645ms) Aug 28 04:05:19.914: INFO: (7) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: test<... (200; 6.034494ms) Aug 28 04:05:19.914: INFO: (7) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 6.395187ms) Aug 28 04:05:19.914: INFO: (7) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 6.456419ms) Aug 28 04:05:19.915: INFO: (7) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 6.601656ms) Aug 28 04:05:19.915: INFO: (7) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname2/proxy/: tls qux (200; 6.340497ms) Aug 28 04:05:19.916: INFO: (7) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 7.933098ms) Aug 28 04:05:19.916: INFO: (7) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 7.88849ms) Aug 28 04:05:19.917: INFO: (7) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 8.111913ms) Aug 28 04:05:19.917: INFO: (7) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 8.092027ms) Aug 28 04:05:19.917: INFO: (7) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... (200; 8.445028ms) Aug 28 04:05:19.920: INFO: (7) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname1/proxy/: foo (200; 11.665661ms) Aug 28 04:05:19.920: INFO: (7) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname2/proxy/: bar (200; 11.758481ms) Aug 28 04:05:19.920: INFO: (7) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 11.723726ms) Aug 28 04:05:19.920: INFO: (7) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname1/proxy/: foo (200; 11.80492ms) Aug 28 04:05:19.925: INFO: (8) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 4.471494ms) Aug 28 04:05:19.925: INFO: (8) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 4.684863ms) Aug 28 04:05:19.925: INFO: (8) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:1080/proxy/: test<... (200; 4.841277ms) Aug 28 04:05:19.925: INFO: (8) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 4.81034ms) Aug 28 04:05:19.925: INFO: (8) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 5.150464ms) Aug 28 04:05:19.925: INFO: (8) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname2/proxy/: bar (200; 5.177449ms) Aug 28 04:05:19.926: INFO: (8) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 5.388311ms) Aug 28 04:05:19.926: INFO: (8) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: ... 
(200; 5.724875ms) Aug 28 04:05:19.926: INFO: (8) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 6.033184ms) Aug 28 04:05:19.926: INFO: (8) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname2/proxy/: tls qux (200; 6.06147ms) Aug 28 04:05:19.927: INFO: (8) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname1/proxy/: foo (200; 6.323746ms) Aug 28 04:05:19.927: INFO: (8) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 4.669781ms) Aug 28 04:05:19.927: INFO: (8) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 6.519213ms) Aug 28 04:05:19.932: INFO: (9) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname2/proxy/: tls qux (200; 4.854437ms) Aug 28 04:05:19.932: INFO: (9) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... (200; 5.04249ms) Aug 28 04:05:19.933: INFO: (9) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 5.621465ms) Aug 28 04:05:19.933: INFO: (9) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 5.784179ms) Aug 28 04:05:19.933: INFO: (9) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 5.612999ms) Aug 28 04:05:19.933: INFO: (9) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 5.773139ms) Aug 28 04:05:19.933: INFO: (9) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:1080/proxy/: test<... (200; 5.916339ms) Aug 28 04:05:19.933: INFO: (9) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 6.140431ms) Aug 28 04:05:19.933: INFO: (9) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 6.129539ms) Aug 28 04:05:19.933: INFO: (9) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 6.26238ms) Aug 28 04:05:19.934: INFO: (9) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 6.227675ms) Aug 28 04:05:19.934: INFO: (9) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname1/proxy/: foo (200; 6.241275ms) Aug 28 04:05:19.934: INFO: (9) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 6.858737ms) Aug 28 04:05:19.935: INFO: (9) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname2/proxy/: bar (200; 7.786008ms) Aug 28 04:05:19.936: INFO: (9) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: test<... (200; 3.222097ms) Aug 28 04:05:19.940: INFO: (10) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... 
(200; 3.649471ms) Aug 28 04:05:19.941: INFO: (10) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 4.483756ms) Aug 28 04:05:19.941: INFO: (10) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 4.710012ms) Aug 28 04:05:19.941: INFO: (10) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname1/proxy/: foo (200; 4.452099ms) Aug 28 04:05:19.941: INFO: (10) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 4.741903ms) Aug 28 04:05:19.942: INFO: (10) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname2/proxy/: bar (200; 5.613584ms) Aug 28 04:05:19.942: INFO: (10) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 5.703509ms) Aug 28 04:05:19.942: INFO: (10) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 5.672268ms) Aug 28 04:05:19.943: INFO: (10) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname2/proxy/: tls qux (200; 6.83164ms) Aug 28 04:05:19.943: INFO: (10) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname1/proxy/: foo (200; 6.779632ms) Aug 28 04:05:19.943: INFO: (10) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 6.565882ms) Aug 28 04:05:19.943: INFO: (10) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 6.650937ms) Aug 28 04:05:19.943: INFO: (10) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 6.73013ms) Aug 28 04:05:19.943: INFO: (10) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: test (200; 3.520885ms) Aug 28 04:05:19.947: INFO: (11) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 3.56035ms) Aug 28 04:05:19.947: INFO: (11) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 3.696451ms) Aug 28 04:05:19.948: INFO: (11) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 4.51026ms) Aug 28 04:05:19.948: INFO: (11) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 4.838477ms) Aug 28 04:05:19.948: INFO: (11) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:1080/proxy/: test<... (200; 4.985602ms) Aug 28 04:05:19.949: INFO: (11) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 5.295131ms) Aug 28 04:05:19.949: INFO: (11) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname1/proxy/: foo (200; 5.311783ms) Aug 28 04:05:19.949: INFO: (11) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 5.63844ms) Aug 28 04:05:19.949: INFO: (11) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... (200; 5.661555ms) Aug 28 04:05:19.949: INFO: (11) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: test<... (200; 5.502693ms) Aug 28 04:05:19.956: INFO: (12) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 5.685994ms) Aug 28 04:05:19.956: INFO: (12) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... 
(200; 5.817202ms) Aug 28 04:05:19.957: INFO: (12) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 6.088732ms) Aug 28 04:05:19.957: INFO: (12) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname1/proxy/: foo (200; 6.155084ms) Aug 28 04:05:19.957: INFO: (12) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 6.41717ms) Aug 28 04:05:19.960: INFO: (13) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 3.520245ms) Aug 28 04:05:19.961: INFO: (13) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 4.065017ms) Aug 28 04:05:19.961: INFO: (13) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 4.315869ms) Aug 28 04:05:19.962: INFO: (13) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... (200; 4.600816ms) Aug 28 04:05:19.962: INFO: (13) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 4.878379ms) Aug 28 04:05:19.962: INFO: (13) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 5.229105ms) Aug 28 04:05:19.962: INFO: (13) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: test<... (200; 5.619359ms) Aug 28 04:05:19.963: INFO: (13) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname1/proxy/: foo (200; 5.596713ms) Aug 28 04:05:19.963: INFO: (13) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname2/proxy/: tls qux (200; 5.860851ms) Aug 28 04:05:19.963: INFO: (13) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname1/proxy/: foo (200; 5.808336ms) Aug 28 04:05:19.963: INFO: (13) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 5.927184ms) Aug 28 04:05:19.963: INFO: (13) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 6.19978ms) Aug 28 04:05:19.967: INFO: (14) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 3.6465ms) Aug 28 04:05:19.967: INFO: (14) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:1080/proxy/: test<... (200; 3.687208ms) Aug 28 04:05:19.968: INFO: (14) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... 
(200; 4.449194ms) Aug 28 04:05:19.968: INFO: (14) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname1/proxy/: foo (200; 4.54343ms) Aug 28 04:05:19.968: INFO: (14) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 4.601414ms) Aug 28 04:05:19.970: INFO: (14) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname1/proxy/: foo (200; 6.064897ms) Aug 28 04:05:19.970: INFO: (14) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 6.102144ms) Aug 28 04:05:19.970: INFO: (14) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 6.111459ms) Aug 28 04:05:19.970: INFO: (14) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: test (200; 6.343056ms) Aug 28 04:05:19.970: INFO: (14) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname2/proxy/: bar (200; 6.447886ms) Aug 28 04:05:19.970: INFO: (14) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 4.737619ms) Aug 28 04:05:19.970: INFO: (14) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 6.347919ms) Aug 28 04:05:19.970: INFO: (14) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 6.286953ms) Aug 28 04:05:19.970: INFO: (14) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 6.285423ms) Aug 28 04:05:19.974: INFO: (15) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 3.915889ms) Aug 28 04:05:19.974: INFO: (15) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 3.972126ms) Aug 28 04:05:19.974: INFO: (15) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:1080/proxy/: test<... (200; 4.023007ms) Aug 28 04:05:19.974: INFO: (15) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 3.962764ms) Aug 28 04:05:19.974: INFO: (15) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 3.986803ms) Aug 28 04:05:19.975: INFO: (15) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 4.766197ms) Aug 28 04:05:19.975: INFO: (15) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: ... (200; 5.610029ms) Aug 28 04:05:19.976: INFO: (15) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 5.813539ms) Aug 28 04:05:19.977: INFO: (15) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname1/proxy/: foo (200; 6.653546ms) Aug 28 04:05:19.977: INFO: (15) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname1/proxy/: foo (200; 6.92331ms) Aug 28 04:05:19.978: INFO: (15) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 7.23638ms) Aug 28 04:05:19.981: INFO: (16) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... 
(200; 2.735765ms) Aug 28 04:05:19.984: INFO: (16) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 5.915269ms) Aug 28 04:05:19.984: INFO: (16) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname1/proxy/: foo (200; 6.177412ms) Aug 28 04:05:19.984: INFO: (16) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 5.709313ms) Aug 28 04:05:19.984: INFO: (16) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 6.234938ms) Aug 28 04:05:19.984: INFO: (16) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname2/proxy/: bar (200; 6.622014ms) Aug 28 04:05:19.984: INFO: (16) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 6.619626ms) Aug 28 04:05:19.985: INFO: (16) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:1080/proxy/: test<... (200; 6.735277ms) Aug 28 04:05:19.985: INFO: (16) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 6.597916ms) Aug 28 04:05:19.985: INFO: (16) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: ... (200; 3.935895ms) Aug 28 04:05:19.990: INFO: (17) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 4.068563ms) Aug 28 04:05:19.990: INFO: (17) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:1080/proxy/: test<... (200; 3.978176ms) Aug 28 04:05:19.991: INFO: (17) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname1/proxy/: foo (200; 4.631876ms) Aug 28 04:05:19.991: INFO: (17) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 5.120666ms) Aug 28 04:05:19.992: INFO: (17) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 5.419804ms) Aug 28 04:05:19.992: INFO: (17) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 5.186027ms) Aug 28 04:05:19.997: INFO: (17) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 10.505123ms) Aug 28 04:05:19.997: INFO: (17) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 10.360174ms) Aug 28 04:05:19.997: INFO: (17) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 10.645201ms) Aug 28 04:05:19.997: INFO: (17) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname2/proxy/: tls qux (200; 10.902333ms) Aug 28 04:05:19.997: INFO: (17) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 10.856856ms) Aug 28 04:05:19.997: INFO: (17) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname2/proxy/: bar (200; 11.129983ms) Aug 28 04:05:19.998: INFO: (17) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: test<... (200; 5.232899ms) Aug 28 04:05:20.004: INFO: (18) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... 
(200; 5.354063ms) Aug 28 04:05:20.004: INFO: (18) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname1/proxy/: foo (200; 5.77415ms) Aug 28 04:05:20.004: INFO: (18) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 5.595889ms) Aug 28 04:05:20.004: INFO: (18) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 5.746364ms) Aug 28 04:05:20.005: INFO: (18) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 6.299901ms) Aug 28 04:05:20.005: INFO: (18) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname2/proxy/: tls qux (200; 6.38119ms) Aug 28 04:05:20.005: INFO: (18) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 6.419814ms) Aug 28 04:05:20.005: INFO: (18) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname2/proxy/: bar (200; 7.004813ms) Aug 28 04:05:20.009: INFO: (19) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r/proxy/: test (200; 3.128039ms) Aug 28 04:05:20.009: INFO: (19) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:462/proxy/: tls qux (200; 3.359665ms) Aug 28 04:05:20.009: INFO: (19) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 3.588728ms) Aug 28 04:05:20.009: INFO: (19) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname1/proxy/: foo (200; 3.801741ms) Aug 28 04:05:20.009: INFO: (19) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:460/proxy/: tls baz (200; 3.842378ms) Aug 28 04:05:20.009: INFO: (19) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 3.882892ms) Aug 28 04:05:20.011: INFO: (19) /api/v1/namespaces/proxy-6904/services/proxy-service-khv8b:portname2/proxy/: bar (200; 5.235121ms) Aug 28 04:05:20.011: INFO: (19) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname2/proxy/: tls qux (200; 5.195189ms) Aug 28 04:05:20.011: INFO: (19) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:160/proxy/: foo (200; 5.307371ms) Aug 28 04:05:20.011: INFO: (19) /api/v1/namespaces/proxy-6904/pods/http:proxy-service-khv8b-hpk2r:1080/proxy/: ... (200; 5.311113ms) Aug 28 04:05:20.011: INFO: (19) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname1/proxy/: foo (200; 5.36394ms) Aug 28 04:05:20.011: INFO: (19) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:1080/proxy/: test<... 
(200; 5.204266ms) Aug 28 04:05:20.011: INFO: (19) /api/v1/namespaces/proxy-6904/services/https:proxy-service-khv8b:tlsportname1/proxy/: tls baz (200; 5.448525ms) Aug 28 04:05:20.011: INFO: (19) /api/v1/namespaces/proxy-6904/pods/proxy-service-khv8b-hpk2r:162/proxy/: bar (200; 4.177495ms) Aug 28 04:05:20.011: INFO: (19) /api/v1/namespaces/proxy-6904/services/http:proxy-service-khv8b:portname2/proxy/: bar (200; 5.593716ms) Aug 28 04:05:20.011: INFO: (19) /api/v1/namespaces/proxy-6904/pods/https:proxy-service-khv8b-hpk2r:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:05:31.908: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 28 04:05:36.922: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 28 04:05:36.922: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Aug 28 04:05:41.146: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9567 /apis/apps/v1/namespaces/deployment-9567/deployments/test-cleanup-deployment 2901238f-eb53-469a-8771-5f6a33491662 4479049 1 2020-08-28 04:05:36 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4002927338 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-28 04:05:36 +0000 UTC,LastTransitionTime:2020-08-28 04:05:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet 
"test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-08-28 04:05:40 +0000 UTC,LastTransitionTime:2020-08-28 04:05:36 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 28 04:05:41.154: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-9567 /apis/apps/v1/namespaces/deployment-9567/replicasets/test-cleanup-deployment-55ffc6b7b6 46720bfa-72f4-4371-986f-57df498a8226 4479038 1 2020-08-28 04:05:36 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 2901238f-eb53-469a-8771-5f6a33491662 0x40029276e7 0x40029276e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4002927758 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 28 04:05:41.160: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-kdzmg" is available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-kdzmg test-cleanup-deployment-55ffc6b7b6- deployment-9567 /api/v1/namespaces/deployment-9567/pods/test-cleanup-deployment-55ffc6b7b6-kdzmg b0ae0cac-e010-42b0-b7ac-8e2bcd446e18 4479037 0 2020-08-28 04:05:36 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 46720bfa-72f4-4371-986f-57df498a8226 0x4002927ac7 0x4002927ac8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mm9fh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mm9fh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mm9fh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:05:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:05:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:05:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:05:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.43,StartTime:2020-08-28 04:05:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 04:05:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://871aacd6ad65c2020ce2a3316f6ffea04327cc4888482439368156e9918f4f5e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:05:41.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9567" for this suite. • [SLOW TEST:9.339 seconds] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":52,"skipped":889,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:05:41.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:05:57.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8327" for this suite. 
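The quota lifecycle exercised above is plain API behavior: creating a ConfigMap in the namespace raises the quota's status.used.configmaps, and deleting it releases the usage once the quota controller resyncs. A minimal kubectl sketch of the same flow, assuming a reachable cluster; the names quota-demo, test-quota and demo-cm are illustrative, not the suite's generated names:

kubectl create namespace quota-demo
cat <<'EOF' | kubectl apply -n quota-demo -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    configmaps: "2"    # cap the number of ConfigMaps in the namespace
EOF
kubectl create configmap demo-cm -n quota-demo --from-literal=key=value
# usage is captured: status.used.configmaps now counts demo-cm
kubectl get resourcequota test-quota -n quota-demo -o jsonpath='{.status.used.configmaps}'
kubectl delete configmap demo-cm -n quota-demo
# after a short resync the used count drops back, which is what the
# "Ensuring resource quota status released usage" step waits for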
• [SLOW TEST:16.260 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":53,"skipped":970,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:05:57.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 28 04:06:00.559: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 28 04:06:02.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734184360, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734184360, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734184360, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734184360, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 28 04:06:05.726: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook 
configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:06:05.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7537" for this suite. STEP: Destroying namespace "webhook-7537-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.651 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":54,"skipped":982,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:06:06.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Aug 28 04:06:06.199: INFO: Waiting up to 5m0s for pod "var-expansion-06b48ea3-470a-4194-a79f-922b697c28ec" in namespace "var-expansion-4336" to be "success or failure" Aug 28 04:06:06.205: INFO: Pod "var-expansion-06b48ea3-470a-4194-a79f-922b697c28ec": Phase="Pending", Reason="", readiness=false. Elapsed: 5.474508ms Aug 28 04:06:08.211: INFO: Pod "var-expansion-06b48ea3-470a-4194-a79f-922b697c28ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012168908s Aug 28 04:06:10.331: INFO: Pod "var-expansion-06b48ea3-470a-4194-a79f-922b697c28ec": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.132330578s STEP: Saw pod success Aug 28 04:06:10.332: INFO: Pod "var-expansion-06b48ea3-470a-4194-a79f-922b697c28ec" satisfied condition "success or failure" Aug 28 04:06:10.355: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-06b48ea3-470a-4194-a79f-922b697c28ec container dapi-container: STEP: delete the pod Aug 28 04:06:10.410: INFO: Waiting for pod var-expansion-06b48ea3-470a-4194-a79f-922b697c28ec to disappear Aug 28 04:06:10.414: INFO: Pod var-expansion-06b48ea3-470a-4194-a79f-922b697c28ec no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:06:10.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4336" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":988,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:06:10.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-1576 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1576 STEP: Deleting pre-stop pod Aug 28 04:06:23.664: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:06:23.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1576" for this suite. 
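The "prestop": 1 entry in the JSON above is the server pod recording that the tester's preStop handler ran: the kubelet executes a container's lifecycle.preStop handler on deletion and only proceeds to SIGTERM once the handler returns or the grace period expires. The real test reports the hook to a peer pod over HTTP; here is a self-contained sketch using an exec handler instead (the pod name and commands are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # runs on pod deletion, before SIGTERM reaches the container
          command: ["sh", "-c", "echo prestop > /tmp/hook.log && sleep 5"]
EOF
kubectl delete pod prestop-demo   # triggers the preStop handler first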
• [SLOW TEST:13.288 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":56,"skipped":1021,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:06:23.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:06:34.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6047" for this suite. • [SLOW TEST:11.181 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":57,"skipped":1030,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:06:34.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:06:34.973: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 28 04:06:54.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8849 create -f -' Aug 28 04:06:59.991: INFO: stderr: "" Aug 28 04:06:59.992: INFO: stdout: "e2e-test-crd-publish-openapi-707-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 28 04:06:59.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8849 delete e2e-test-crd-publish-openapi-707-crds test-cr' Aug 28 04:07:01.265: INFO: stderr: "" Aug 28 04:07:01.266: INFO: stdout: "e2e-test-crd-publish-openapi-707-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Aug 28 04:07:01.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8849 apply -f -' Aug 28 04:07:02.847: INFO: stderr: "" Aug 28 04:07:02.847: INFO: stdout: "e2e-test-crd-publish-openapi-707-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 28 04:07:02.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8849 delete e2e-test-crd-publish-openapi-707-crds test-cr' Aug 28 04:07:04.087: INFO: stderr: "" Aug 28 04:07:04.087: INFO: stdout: "e2e-test-crd-publish-openapi-707-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 28 04:07:04.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-707-crds' Aug 28 04:07:05.651: INFO: stderr: "" Aug 28 04:07:05.651: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-707-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. 
Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<map[string]>\n Specification of Waldo\n\n status\t<map[string]>\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:07:25.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8849" for this suite. • [SLOW TEST:50.344 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":58,"skipped":1035,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:07:25.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9852.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9852.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 28 04:07:33.459: INFO: DNS probes using dns-9852/dns-test-50745fbf-612a-415f-bfc3-62caeae6ac2b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:07:33.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9852" for this suite. • [SLOW TEST:8.281 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":59,"skipped":1051,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:07:33.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:07:34.216: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every 
node of the cluster. Aug 28 04:07:34.373: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:34.418: INFO: Number of nodes with available pods: 0 Aug 28 04:07:34.418: INFO: Node jerma-worker is running more than one daemon pod Aug 28 04:07:35.909: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:36.141: INFO: Number of nodes with available pods: 0 Aug 28 04:07:36.141: INFO: Node jerma-worker is running more than one daemon pod Aug 28 04:07:36.604: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:36.673: INFO: Number of nodes with available pods: 0 Aug 28 04:07:36.673: INFO: Node jerma-worker is running more than one daemon pod Aug 28 04:07:37.599: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:37.605: INFO: Number of nodes with available pods: 0 Aug 28 04:07:37.605: INFO: Node jerma-worker is running more than one daemon pod Aug 28 04:07:38.478: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:38.483: INFO: Number of nodes with available pods: 0 Aug 28 04:07:38.484: INFO: Node jerma-worker is running more than one daemon pod Aug 28 04:07:39.436: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:39.470: INFO: Number of nodes with available pods: 0 Aug 28 04:07:39.470: INFO: Node jerma-worker is running more than one daemon pod Aug 28 04:07:40.429: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:40.435: INFO: Number of nodes with available pods: 2 Aug 28 04:07:40.435: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 28 04:07:40.626: INFO: Wrong image for pod: daemon-set-ppcbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:40.626: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:40.656: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:41.665: INFO: Wrong image for pod: daemon-set-ppcbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:41.665: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 28 04:07:41.672: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:42.704: INFO: Wrong image for pod: daemon-set-ppcbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:42.704: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:42.713: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:43.665: INFO: Wrong image for pod: daemon-set-ppcbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:43.665: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:43.674: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:44.664: INFO: Wrong image for pod: daemon-set-ppcbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:44.664: INFO: Pod daemon-set-ppcbm is not available Aug 28 04:07:44.664: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:44.673: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:45.730: INFO: Wrong image for pod: daemon-set-ppcbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:45.730: INFO: Pod daemon-set-ppcbm is not available Aug 28 04:07:45.730: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:45.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:46.664: INFO: Wrong image for pod: daemon-set-ppcbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:46.664: INFO: Pod daemon-set-ppcbm is not available Aug 28 04:07:46.664: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:46.674: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:47.665: INFO: Wrong image for pod: daemon-set-ppcbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:47.665: INFO: Pod daemon-set-ppcbm is not available Aug 28 04:07:47.665: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 28 04:07:47.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:48.665: INFO: Wrong image for pod: daemon-set-ppcbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:48.665: INFO: Pod daemon-set-ppcbm is not available Aug 28 04:07:48.665: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:48.675: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:49.664: INFO: Wrong image for pod: daemon-set-ppcbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:49.665: INFO: Pod daemon-set-ppcbm is not available Aug 28 04:07:49.665: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:49.675: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:50.667: INFO: Wrong image for pod: daemon-set-ppcbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:50.667: INFO: Pod daemon-set-ppcbm is not available Aug 28 04:07:50.667: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:50.675: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:51.763: INFO: Wrong image for pod: daemon-set-ppcbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:51.763: INFO: Pod daemon-set-ppcbm is not available Aug 28 04:07:51.763: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:51.780: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:52.665: INFO: Pod daemon-set-bgx9r is not available Aug 28 04:07:52.666: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:52.673: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:53.926: INFO: Pod daemon-set-bgx9r is not available Aug 28 04:07:53.926: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:53.935: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:54.665: INFO: Pod daemon-set-bgx9r is not available Aug 28 04:07:54.665: INFO: Wrong image for pod: daemon-set-tnx5r. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:54.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:55.664: INFO: Pod daemon-set-bgx9r is not available Aug 28 04:07:55.664: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:55.709: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:56.665: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:56.675: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:57.664: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:57.674: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:58.665: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:58.665: INFO: Pod daemon-set-tnx5r is not available Aug 28 04:07:58.671: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:07:59.664: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:07:59.664: INFO: Pod daemon-set-tnx5r is not available Aug 28 04:07:59.673: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:08:00.664: INFO: Wrong image for pod: daemon-set-tnx5r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 04:08:00.664: INFO: Pod daemon-set-tnx5r is not available Aug 28 04:08:00.674: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:08:01.705: INFO: Pod daemon-set-lbhcc is not available Aug 28 04:08:01.719: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Aug 28 04:08:01.727: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:08:01.732: INFO: Number of nodes with available pods: 1 Aug 28 04:08:01.732: INFO: Node jerma-worker is running more than one daemon pod Aug 28 04:08:02.743: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:08:02.749: INFO: Number of nodes with available pods: 1 Aug 28 04:08:02.749: INFO: Node jerma-worker is running more than one daemon pod Aug 28 04:08:03.743: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:08:03.749: INFO: Number of nodes with available pods: 1 Aug 28 04:08:03.749: INFO: Node jerma-worker is running more than one daemon pod Aug 28 04:08:04.743: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:08:04.750: INFO: Number of nodes with available pods: 1 Aug 28 04:08:04.750: INFO: Node jerma-worker is running more than one daemon pod Aug 28 04:08:05.743: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:08:05.749: INFO: Number of nodes with available pods: 2 Aug 28 04:08:05.749: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-508, will wait for the garbage collector to delete the pods Aug 28 04:08:05.876: INFO: Deleting DaemonSet.extensions daemon-set took: 8.315403ms Aug 28 04:08:06.676: INFO: Terminating DaemonSet.extensions daemon-set pods took: 800.931574ms Aug 28 04:08:11.782: INFO: Number of nodes with available pods: 0 Aug 28 04:08:11.783: INFO: Number of running nodes: 0, number of available pods: 0 Aug 28 04:08:11.787: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-508/daemonsets","resourceVersion":"4479816"},"items":null} Aug 28 04:08:11.790: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-508/pods","resourceVersion":"4479816"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:08:11.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-508" for this suite. 
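The long poll above is the RollingUpdate contract in action: after the pod template image changes, the DaemonSet controller replaces daemon pods one node at a time (maxUnavailable defaults to 1), so each node briefly reports the old httpd image as "wrong" and one pod at a time shows "not available" until every node runs agnhost:2.8. A hedged sketch of driving the same update by hand; the manifest and the names daemon-set, ds-demo and app are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: ds-demo
  updateStrategy:
    type: RollingUpdate        # the apps/v1 default
  template:
    metadata:
      labels:
        app: ds-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# change the template image; the controller rolls it out node by node
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
kubectl rollout status daemonset/daemon-set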
• [SLOW TEST:38.303 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":60,"skipped":1066,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:08:11.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-77b97642-374d-4b40-a9c4-052a73b47c0b STEP: Creating a pod to test consume configMaps Aug 28 04:08:11.950: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d0d45f8-0416-4413-b266-94097fc11206" in namespace "configmap-1139" to be "success or failure" Aug 28 04:08:11.983: INFO: Pod "pod-configmaps-6d0d45f8-0416-4413-b266-94097fc11206": Phase="Pending", Reason="", readiness=false. Elapsed: 33.281683ms Aug 28 04:08:13.993: INFO: Pod "pod-configmaps-6d0d45f8-0416-4413-b266-94097fc11206": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042705555s Aug 28 04:08:16.002: INFO: Pod "pod-configmaps-6d0d45f8-0416-4413-b266-94097fc11206": Phase="Running", Reason="", readiness=true. Elapsed: 4.052008473s Aug 28 04:08:18.008: INFO: Pod "pod-configmaps-6d0d45f8-0416-4413-b266-94097fc11206": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058095201s STEP: Saw pod success Aug 28 04:08:18.008: INFO: Pod "pod-configmaps-6d0d45f8-0416-4413-b266-94097fc11206" satisfied condition "success or failure" Aug 28 04:08:18.012: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-6d0d45f8-0416-4413-b266-94097fc11206 container configmap-volume-test: STEP: delete the pod Aug 28 04:08:18.118: INFO: Waiting for pod pod-configmaps-6d0d45f8-0416-4413-b266-94097fc11206 to disappear Aug 28 04:08:18.182: INFO: Pod pod-configmaps-6d0d45f8-0416-4413-b266-94097fc11206 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:08:18.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1139" for this suite. 
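"Mappings and Item mode" in the test name refers to the items list of a ConfigMap volume: each item maps a ConfigMap key to a chosen file path and may set a per-file mode, which the test container then verifies by reading the file and its permissions. A minimal sketch with illustrative names (demo-config, cm-vol-demo):

kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-vol-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/cm/path/to && cat /etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-config
      items:
      - key: data-1              # the ConfigMap key...
        path: path/to/data-1     # ...mapped to a different file path
        mode: 0400               # the per-item file mode under test
EOF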
• [SLOW TEST:6.430 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1068,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:08:18.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:08:23.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6441" for this suite. 
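The adoption step above is ownerReference bookkeeping: when a ReplicationController's selector matches an existing pod that has no controller owner, the RC adopts the orphan instead of creating a new replica, so replicas: 1 is already satisfied. A sketch of the same sequence, with illustrative names:

kubectl run pod-adoption --image=docker.io/library/httpd:2.4.38-alpine \
  --labels=name=pod-adoption --restart=Never
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption           # matches the orphan pod's label
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# the pre-existing pod should now carry a controller ownerReference to the RC
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'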
• [SLOW TEST:5.392 seconds] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":62,"skipped":1072,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:08:23.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 28 04:08:23.735: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01d86ef9-0d2e-403a-9571-ca9519f8e145" in namespace "projected-3845" to be "success or failure" Aug 28 04:08:23.793: INFO: Pod "downwardapi-volume-01d86ef9-0d2e-403a-9571-ca9519f8e145": Phase="Pending", Reason="", readiness=false. Elapsed: 57.921049ms Aug 28 04:08:26.008: INFO: Pod "downwardapi-volume-01d86ef9-0d2e-403a-9571-ca9519f8e145": Phase="Pending", Reason="", readiness=false. Elapsed: 2.273281287s Aug 28 04:08:28.015: INFO: Pod "downwardapi-volume-01d86ef9-0d2e-403a-9571-ca9519f8e145": Phase="Running", Reason="", readiness=true. Elapsed: 4.280522106s Aug 28 04:08:30.022: INFO: Pod "downwardapi-volume-01d86ef9-0d2e-403a-9571-ca9519f8e145": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.287609255s STEP: Saw pod success Aug 28 04:08:30.023: INFO: Pod "downwardapi-volume-01d86ef9-0d2e-403a-9571-ca9519f8e145" satisfied condition "success or failure" Aug 28 04:08:30.028: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-01d86ef9-0d2e-403a-9571-ca9519f8e145 container client-container: STEP: delete the pod Aug 28 04:08:30.067: INFO: Waiting for pod downwardapi-volume-01d86ef9-0d2e-403a-9571-ca9519f8e145 to disappear Aug 28 04:08:30.136: INFO: Pod downwardapi-volume-01d86ef9-0d2e-403a-9571-ca9519f8e145 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:08:30.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3845" for this suite. 
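This is the downward API analogue of the ConfigMap item-mode test: a projected volume's downwardAPI source lists items, each mapping a pod field to a file path with an optional per-file mode, and the client container checks the resulting file. A minimal sketch; the names downward-demo and podinfo are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname          # file exposing the pod's own name
            fieldRef:
              fieldPath: metadata.name
            mode: 0400             # the item-file mode the test checks
EOF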
• [SLOW TEST:6.540 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1073,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:08:30.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Aug 28 04:08:30.354: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-565 /api/v1/namespaces/watch-565/configmaps/e2e-watch-test-label-changed da6a6126-4817-4705-ba33-f8447189726d 4479972 0 2020-08-28 04:08:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 28 04:08:30.356: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-565 /api/v1/namespaces/watch-565/configmaps/e2e-watch-test-label-changed da6a6126-4817-4705-ba33-f8447189726d 4479973 0 2020-08-28 04:08:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 28 04:08:30.357: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-565 /api/v1/namespaces/watch-565/configmaps/e2e-watch-test-label-changed da6a6126-4817-4705-ba33-f8447189726d 4479974 0 2020-08-28 04:08:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Aug 28 04:08:40.432: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed 
watch-565 /api/v1/namespaces/watch-565/configmaps/e2e-watch-test-label-changed da6a6126-4817-4705-ba33-f8447189726d 4480013 0 2020-08-28 04:08:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 28 04:08:40.433: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-565 /api/v1/namespaces/watch-565/configmaps/e2e-watch-test-label-changed da6a6126-4817-4705-ba33-f8447189726d 4480014 0 2020-08-28 04:08:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Aug 28 04:08:40.434: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-565 /api/v1/namespaces/watch-565/configmaps/e2e-watch-test-label-changed da6a6126-4817-4705-ba33-f8447189726d 4480015 0 2020-08-28 04:08:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:08:40.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-565" for this suite. • [SLOW TEST:10.255 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":64,"skipped":1114,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:08:40.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4085.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4085.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4085.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4085.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for 
the results for each expected name from probers Aug 28 04:08:46.838: INFO: DNS probes using dns-test-8769854f-ee3b-43fc-8a86-f0d20c0908e0 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4085.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4085.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4085.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4085.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 28 04:08:55.620: INFO: File wheezy_udp@dns-test-service-3.dns-4085.svc.cluster.local from pod dns-4085/dns-test-391cdc05-8d49-4828-8acc-6a036eff8307 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 28 04:08:55.624: INFO: File jessie_udp@dns-test-service-3.dns-4085.svc.cluster.local from pod dns-4085/dns-test-391cdc05-8d49-4828-8acc-6a036eff8307 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 28 04:08:55.624: INFO: Lookups using dns-4085/dns-test-391cdc05-8d49-4828-8acc-6a036eff8307 failed for: [wheezy_udp@dns-test-service-3.dns-4085.svc.cluster.local jessie_udp@dns-test-service-3.dns-4085.svc.cluster.local] Aug 28 04:09:00.632: INFO: File wheezy_udp@dns-test-service-3.dns-4085.svc.cluster.local from pod dns-4085/dns-test-391cdc05-8d49-4828-8acc-6a036eff8307 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 28 04:09:00.637: INFO: File jessie_udp@dns-test-service-3.dns-4085.svc.cluster.local from pod dns-4085/dns-test-391cdc05-8d49-4828-8acc-6a036eff8307 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 28 04:09:00.637: INFO: Lookups using dns-4085/dns-test-391cdc05-8d49-4828-8acc-6a036eff8307 failed for: [wheezy_udp@dns-test-service-3.dns-4085.svc.cluster.local jessie_udp@dns-test-service-3.dns-4085.svc.cluster.local] Aug 28 04:09:05.631: INFO: File wheezy_udp@dns-test-service-3.dns-4085.svc.cluster.local from pod dns-4085/dns-test-391cdc05-8d49-4828-8acc-6a036eff8307 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 28 04:09:05.639: INFO: File jessie_udp@dns-test-service-3.dns-4085.svc.cluster.local from pod dns-4085/dns-test-391cdc05-8d49-4828-8acc-6a036eff8307 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 28 04:09:05.639: INFO: Lookups using dns-4085/dns-test-391cdc05-8d49-4828-8acc-6a036eff8307 failed for: [wheezy_udp@dns-test-service-3.dns-4085.svc.cluster.local jessie_udp@dns-test-service-3.dns-4085.svc.cluster.local] Aug 28 04:09:10.632: INFO: File wheezy_udp@dns-test-service-3.dns-4085.svc.cluster.local from pod dns-4085/dns-test-391cdc05-8d49-4828-8acc-6a036eff8307 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 28 04:09:10.638: INFO: File jessie_udp@dns-test-service-3.dns-4085.svc.cluster.local from pod dns-4085/dns-test-391cdc05-8d49-4828-8acc-6a036eff8307 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 28 04:09:10.638: INFO: Lookups using dns-4085/dns-test-391cdc05-8d49-4828-8acc-6a036eff8307 failed for: [wheezy_udp@dns-test-service-3.dns-4085.svc.cluster.local jessie_udp@dns-test-service-3.dns-4085.svc.cluster.local] Aug 28 04:09:15.636: INFO: DNS probes using dns-test-391cdc05-8d49-4828-8acc-6a036eff8307 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4085.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4085.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4085.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4085.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 28 04:09:24.328: INFO: DNS probes using dns-test-6948f25f-6621-41f3-a20c-6957760628d7 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:09:24.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4085" for this suite. • [SLOW TEST:44.329 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":65,"skipped":1121,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:09:24.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 28 04:09:24.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never 
--generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4736' Aug 28 04:09:26.201: INFO: stderr: "" Aug 28 04:09:26.201: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765 Aug 28 04:09:26.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4736' Aug 28 04:09:41.711: INFO: stderr: "" Aug 28 04:09:41.711: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:09:41.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4736" for this suite. • [SLOW TEST:16.930 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":66,"skipped":1133,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:09:41.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:09:42.003: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 28 04:09:42.026: INFO: Number of nodes with available pods: 0 Aug 28 04:09:42.026: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Aug 28 04:09:42.167: INFO: Number of nodes with available pods: 0 Aug 28 04:09:42.167: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:43.173: INFO: Number of nodes with available pods: 0 Aug 28 04:09:43.173: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:44.174: INFO: Number of nodes with available pods: 0 Aug 28 04:09:44.174: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:45.172: INFO: Number of nodes with available pods: 0 Aug 28 04:09:45.172: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:46.305: INFO: Number of nodes with available pods: 0 Aug 28 04:09:46.305: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:47.173: INFO: Number of nodes with available pods: 1 Aug 28 04:09:47.173: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 28 04:09:47.255: INFO: Number of nodes with available pods: 1 Aug 28 04:09:47.255: INFO: Number of running nodes: 0, number of available pods: 1 Aug 28 04:09:48.263: INFO: Number of nodes with available pods: 0 Aug 28 04:09:48.263: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 28 04:09:48.281: INFO: Number of nodes with available pods: 0 Aug 28 04:09:48.281: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:49.288: INFO: Number of nodes with available pods: 0 Aug 28 04:09:49.288: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:50.289: INFO: Number of nodes with available pods: 0 Aug 28 04:09:50.289: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:51.289: INFO: Number of nodes with available pods: 0 Aug 28 04:09:51.289: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:52.288: INFO: Number of nodes with available pods: 0 Aug 28 04:09:52.288: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:53.289: INFO: Number of nodes with available pods: 0 Aug 28 04:09:53.289: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:54.287: INFO: Number of nodes with available pods: 0 Aug 28 04:09:54.287: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:55.288: INFO: Number of nodes with available pods: 0 Aug 28 04:09:55.288: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:56.288: INFO: Number of nodes with available pods: 0 Aug 28 04:09:56.288: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:57.289: INFO: Number of nodes with available pods: 0 Aug 28 04:09:57.290: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:58.286: INFO: Number of nodes with available pods: 0 Aug 28 04:09:58.286: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:09:59.287: INFO: Number of nodes with available pods: 0 Aug 28 04:09:59.287: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:10:00.289: INFO: Number of nodes with available pods: 0 Aug 28 04:10:00.289: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:10:01.315: INFO: Number of nodes with available pods: 0 Aug 28 04:10:01.315: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:10:02.289: INFO: Number of nodes with available pods: 0 Aug 28 04:10:02.289: INFO: Node jerma-worker2 is running 
more than one daemon pod Aug 28 04:10:03.297: INFO: Number of nodes with available pods: 0 Aug 28 04:10:03.297: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:10:04.288: INFO: Number of nodes with available pods: 0 Aug 28 04:10:04.288: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:10:05.289: INFO: Number of nodes with available pods: 0 Aug 28 04:10:05.289: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:10:06.288: INFO: Number of nodes with available pods: 1 Aug 28 04:10:06.288: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2243, will wait for the garbage collector to delete the pods Aug 28 04:10:06.379: INFO: Deleting DaemonSet.extensions daemon-set took: 29.334978ms Aug 28 04:10:06.679: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.844195ms Aug 28 04:10:11.785: INFO: Number of nodes with available pods: 0 Aug 28 04:10:11.786: INFO: Number of running nodes: 0, number of available pods: 0 Aug 28 04:10:11.791: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2243/daemonsets","resourceVersion":"4480472"},"items":null} Aug 28 04:10:11.795: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2243/pods","resourceVersion":"4480472"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:10:11.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2243" for this suite. 
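The "complex daemon" sequence above is driven purely by label selection: the DaemonSet carries a nodeSelector, so its pod appears only on nodes labeled blue, is unscheduled when the node is relabeled green, and returns once the DaemonSet's selector is patched to green. A rough manual replay, as a sketch only (jerma-worker2 is the node named in the log; the DaemonSet name, label key, and pause image are invented):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo            # illustrative name
spec:
  selector:
    matchLabels: {app: daemon-demo}
  updateStrategy:
    type: RollingUpdate            # the strategy the test switches to
  template:
    metadata:
      labels: {app: daemon-demo}
    spec:
      nodeSelector: {color: blue}
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
EOF

kubectl label node jerma-worker2 color=blue               # daemon pod is launched
kubectl label node jerma-worker2 color=green --overwrite  # daemon pod is unscheduled
kubectl patch ds daemon-set-demo --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"color":"green"}}}}}'  # pod returns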
• [SLOW TEST:30.139 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":67,"skipped":1148,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:10:11.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-18f7891f-66d4-4e29-b835-7cc644b5c48d STEP: Creating a pod to test consume configMaps Aug 28 04:10:12.064: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-03c23611-f846-4510-bfd9-ff99ae7af3d8" in namespace "projected-1614" to be "success or failure" Aug 28 04:10:12.081: INFO: Pod "pod-projected-configmaps-03c23611-f846-4510-bfd9-ff99ae7af3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.742706ms Aug 28 04:10:14.089: INFO: Pod "pod-projected-configmaps-03c23611-f846-4510-bfd9-ff99ae7af3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024433063s Aug 28 04:10:16.095: INFO: Pod "pod-projected-configmaps-03c23611-f846-4510-bfd9-ff99ae7af3d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03096338s STEP: Saw pod success Aug 28 04:10:16.096: INFO: Pod "pod-projected-configmaps-03c23611-f846-4510-bfd9-ff99ae7af3d8" satisfied condition "success or failure" Aug 28 04:10:16.102: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-03c23611-f846-4510-bfd9-ff99ae7af3d8 container projected-configmap-volume-test: STEP: delete the pod Aug 28 04:10:16.231: INFO: Waiting for pod pod-projected-configmaps-03c23611-f846-4510-bfd9-ff99ae7af3d8 to disappear Aug 28 04:10:16.235: INFO: Pod pod-projected-configmaps-03c23611-f846-4510-bfd9-ff99ae7af3d8 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:10:16.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1614" for this suite. 
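"Consumable from pods in volume with mappings" means the ConfigMap key is not mounted under its own name: an items entry remaps it to an arbitrary path inside the volume. A minimal sketch under assumed names (the ConfigMap, pod, image, and paths are all invented here):

kubectl create configmap cm-mappings-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    # reads the remapped path, not .../data-1
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: cm-mappings-demo
          items:
          - key: data-1
            path: path/to/data-2   # key data-1 surfaces as this file
EOF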
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1163,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:10:16.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0828 04:10:29.416639 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 28 04:10:29.416: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:10:29.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8970" for this suite. 
• [SLOW TEST:13.393 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":69,"skipped":1176,"failed":0} [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:10:29.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:10:29.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9941' Aug 28 04:10:31.518: INFO: stderr: "" Aug 28 04:10:31.518: INFO: stdout: "replicationcontroller/agnhost-master created\n" Aug 28 04:10:31.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9941' Aug 28 04:10:33.220: INFO: stderr: "" Aug 28 04:10:33.220: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Aug 28 04:10:34.236: INFO: Selector matched 1 pods for map[app:agnhost] Aug 28 04:10:34.236: INFO: Found 0 / 1 Aug 28 04:10:35.288: INFO: Selector matched 1 pods for map[app:agnhost] Aug 28 04:10:35.288: INFO: Found 1 / 1 Aug 28 04:10:35.288: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 28 04:10:35.346: INFO: Selector matched 1 pods for map[app:agnhost] Aug 28 04:10:35.347: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Aug 28 04:10:35.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-sgsld --namespace=kubectl-9941' Aug 28 04:10:36.785: INFO: stderr: "" Aug 28 04:10:36.786: INFO: stdout: "Name: agnhost-master-sgsld\nNamespace: kubectl-9941\nPriority: 0\nNode: jerma-worker/172.18.0.6\nStart Time: Fri, 28 Aug 2020 04:10:31 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.63\nIPs:\n IP: 10.244.2.63\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://7e5f49b05ec87ca7b64a9b780edffc188604fb6d8515f1426c0dd55239d527f7\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 28 Aug 2020 04:10:34 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-gt567 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-gt567:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-gt567\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled <unknown> default-scheduler Successfully assigned kubectl-9941/agnhost-master-sgsld to jerma-worker\n Normal Pulled 3s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker Created container agnhost-master\n Normal Started 2s kubelet, jerma-worker Started container agnhost-master\n" Aug 28 04:10:36.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9941' Aug 28 04:10:38.242: INFO: stderr: "" Aug 28 04:10:38.242: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9941\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: agnhost-master-sgsld\n" Aug 28 04:10:38.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9941' Aug 28 04:10:39.509: INFO: stderr: "" Aug 28 04:10:39.509: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9941\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.97.250.17\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.63:6379\nSession Affinity: None\nEvents: <none>\n" Aug 28 04:10:39.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Aug 28 04:10:40.850: INFO: stderr: "" Aug 28 04:10:40.850: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 15 Aug 2020 09:37:06 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: <unset>\n RenewTime: Fri, 28 Aug 2020 04:10:36 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 28 Aug 2020 04:08:22 +0000 Sat, 15 Aug 2020 09:37:06 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 28 Aug 2020 04:08:22 +0000 Sat, 15 Aug 2020 09:37:06 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 28 Aug 2020 04:08:22 +0000 Sat, 15 Aug 2020 09:37:06 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 28 Aug 2020 04:08:22 +0000 Sat, 15 Aug 2020 09:37:40 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.10\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: e52c45bc589d48d995e8fd79ad5bf250\n System UUID: b981bdc7-d264-48ef-ab5e-3308e23aaf13\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.17.5\n Kube-Proxy Version: v1.17.5\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-bvrm4 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 12d\n kube-system coredns-6955765f44-db8rh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 12d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kindnet-j88mt 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 12d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-proxy-hmb6l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n local-path-storage local-path-provisioner-58f6947c7-p2cqw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Aug 28 04:10:40.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9941' Aug 28 04:10:42.149: INFO: stderr: "" Aug 28 04:10:42.149: INFO: stdout: "Name: kubectl-9941\nLabels: e2e-framework=kubectl\n e2e-run=54444fcb-452f-4e1d-8ddd-d4cfa5dbceef\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:10:42.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9941" for this suite. • [SLOW TEST:12.548 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1048 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":70,"skipped":1176,"failed":0} [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:10:42.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-57aff599-27af-49ce-b5b3-8a271bbb6505 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-57aff599-27af-49ce-b5b3-8a271bbb6505 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:10:48.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8180" for this suite. 
• [SLOW TEST:6.287 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1176,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:10:48.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 28 04:10:48.650: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a96e6a81-5615-4e84-a6a5-9b7e68a829ba" in namespace "projected-6559" to be "success or failure" Aug 28 04:10:48.781: INFO: Pod "downwardapi-volume-a96e6a81-5615-4e84-a6a5-9b7e68a829ba": Phase="Pending", Reason="", readiness=false. Elapsed: 131.408056ms Aug 28 04:10:50.806: INFO: Pod "downwardapi-volume-a96e6a81-5615-4e84-a6a5-9b7e68a829ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156667362s Aug 28 04:10:52.813: INFO: Pod "downwardapi-volume-a96e6a81-5615-4e84-a6a5-9b7e68a829ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.163603622s STEP: Saw pod success Aug 28 04:10:52.813: INFO: Pod "downwardapi-volume-a96e6a81-5615-4e84-a6a5-9b7e68a829ba" satisfied condition "success or failure" Aug 28 04:10:52.818: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a96e6a81-5615-4e84-a6a5-9b7e68a829ba container client-container: STEP: delete the pod Aug 28 04:10:52.853: INFO: Waiting for pod downwardapi-volume-a96e6a81-5615-4e84-a6a5-9b7e68a829ba to disappear Aug 28 04:10:52.997: INFO: Pod downwardapi-volume-a96e6a81-5615-4e84-a6a5-9b7e68a829ba no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:10:52.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6559" for this suite. 
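For the cpu-limit case the downward API reads a resource field rather than a metadata field: resourceFieldRef plus a divisor turns the container's own limit into a file. A sketch with invented values (the test's real fixture differs):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits: {cpu: 500m, memory: 64Mi}
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container   # required in the volume form
              resource: limits.cpu
              divisor: 1m                       # file reads 500
EOF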
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1184,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:10:53.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 28 04:10:53.243: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4168ecc-e745-43db-9dfe-dcd4604eb80d" in namespace "projected-9883" to be "success or failure" Aug 28 04:10:53.266: INFO: Pod "downwardapi-volume-d4168ecc-e745-43db-9dfe-dcd4604eb80d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.095739ms Aug 28 04:10:55.314: INFO: Pod "downwardapi-volume-d4168ecc-e745-43db-9dfe-dcd4604eb80d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071353212s Aug 28 04:10:57.393: INFO: Pod "downwardapi-volume-d4168ecc-e745-43db-9dfe-dcd4604eb80d": Phase="Running", Reason="", readiness=true. Elapsed: 4.14961389s Aug 28 04:10:59.398: INFO: Pod "downwardapi-volume-d4168ecc-e745-43db-9dfe-dcd4604eb80d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.154693196s STEP: Saw pod success Aug 28 04:10:59.398: INFO: Pod "downwardapi-volume-d4168ecc-e745-43db-9dfe-dcd4604eb80d" satisfied condition "success or failure" Aug 28 04:10:59.402: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d4168ecc-e745-43db-9dfe-dcd4604eb80d container client-container: STEP: delete the pod Aug 28 04:10:59.435: INFO: Waiting for pod downwardapi-volume-d4168ecc-e745-43db-9dfe-dcd4604eb80d to disappear Aug 28 04:10:59.476: INFO: Pod downwardapi-volume-d4168ecc-e745-43db-9dfe-dcd4604eb80d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:10:59.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9883" for this suite. 
• [SLOW TEST:6.402 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:10:59.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-9138 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9138 to expose endpoints map[] Aug 28 04:10:59.610: INFO: successfully validated that service multi-endpoint-test in namespace services-9138 exposes endpoints map[] (7.553298ms elapsed) STEP: Creating pod pod1 in namespace services-9138 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9138 to expose endpoints map[pod1:[100]] Aug 28 04:11:02.880: INFO: successfully validated that service multi-endpoint-test in namespace services-9138 exposes endpoints map[pod1:[100]] (3.258822125s elapsed) STEP: Creating pod pod2 in namespace services-9138 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9138 to expose endpoints map[pod1:[100] pod2:[101]] Aug 28 04:11:07.576: INFO: successfully validated that service multi-endpoint-test in namespace services-9138 exposes endpoints map[pod1:[100] pod2:[101]] (4.689673374s elapsed) STEP: Deleting pod pod1 in namespace services-9138 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9138 to expose endpoints map[pod2:[101]] Aug 28 04:11:07.606: INFO: successfully validated that service multi-endpoint-test in namespace services-9138 exposes endpoints map[pod2:[101]] (22.297414ms elapsed) STEP: Deleting pod pod2 in namespace services-9138 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9138 to expose endpoints map[] Aug 28 04:11:07.666: INFO: successfully validated that service multi-endpoint-test in namespace services-9138 exposes endpoints map[] (37.490131ms elapsed) [AfterEach] [sig-network] Services 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:11:07.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9138" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:8.382 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":74,"skipped":1245,"failed":0} S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:11:07.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:11:08.214: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-2602 I0828 04:11:08.278683 8 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2602, replica count: 1 I0828 04:11:09.329852 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0828 04:11:10.330576 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0828 04:11:11.331233 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0828 04:11:12.331893 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 28 04:11:12.482: INFO: Created: latency-svc-bxqp5 Aug 28 04:11:12.508: INFO: Got endpoints: latency-svc-bxqp5 [72.685395ms] Aug 28 04:11:12.607: INFO: Created: latency-svc-mcb7m Aug 28 04:11:12.624: INFO: Got endpoints: latency-svc-mcb7m [114.721607ms] Aug 28 04:11:12.711: INFO: Created: latency-svc-gsmjn Aug 28 04:11:12.714: INFO: Got endpoints: latency-svc-gsmjn [205.332656ms] Aug 28 04:11:12.763: INFO: Created: latency-svc-7fmfn Aug 28 04:11:12.774: INFO: Got endpoints: latency-svc-7fmfn [265.71911ms] Aug 28 04:11:12.799: INFO: Created: latency-svc-9zszh Aug 28 04:11:12.866: INFO: Got endpoints: latency-svc-9zszh 
[356.902259ms] Aug 28 04:11:12.888: INFO: Created: latency-svc-wzh9v Aug 28 04:11:12.903: INFO: Got endpoints: latency-svc-wzh9v [394.595819ms] Aug 28 04:11:12.945: INFO: Created: latency-svc-v2fgk Aug 28 04:11:12.952: INFO: Got endpoints: latency-svc-v2fgk [442.726968ms] Aug 28 04:11:13.009: INFO: Created: latency-svc-5lxmq Aug 28 04:11:13.039: INFO: Got endpoints: latency-svc-5lxmq [529.06458ms] Aug 28 04:11:13.105: INFO: Created: latency-svc-bk7zq Aug 28 04:11:13.148: INFO: Got endpoints: latency-svc-bk7zq [637.773253ms] Aug 28 04:11:13.163: INFO: Created: latency-svc-m7s8s Aug 28 04:11:13.172: INFO: Got endpoints: latency-svc-m7s8s [661.774959ms] Aug 28 04:11:13.194: INFO: Created: latency-svc-qcf7c Aug 28 04:11:13.239: INFO: Got endpoints: latency-svc-qcf7c [729.315549ms] Aug 28 04:11:13.296: INFO: Created: latency-svc-2xtxf Aug 28 04:11:13.297: INFO: Got endpoints: latency-svc-2xtxf [786.091858ms] Aug 28 04:11:13.326: INFO: Created: latency-svc-ll422 Aug 28 04:11:13.335: INFO: Got endpoints: latency-svc-ll422 [823.484204ms] Aug 28 04:11:13.368: INFO: Created: latency-svc-xzfsq Aug 28 04:11:13.379: INFO: Got endpoints: latency-svc-xzfsq [868.919802ms] Aug 28 04:11:13.429: INFO: Created: latency-svc-fmb4s Aug 28 04:11:13.438: INFO: Got endpoints: latency-svc-fmb4s [927.668139ms] Aug 28 04:11:13.458: INFO: Created: latency-svc-2256t Aug 28 04:11:13.461: INFO: Got endpoints: latency-svc-2256t [948.955672ms] Aug 28 04:11:13.495: INFO: Created: latency-svc-4c9nw Aug 28 04:11:13.505: INFO: Got endpoints: latency-svc-4c9nw [880.319104ms] Aug 28 04:11:13.572: INFO: Created: latency-svc-z4wql Aug 28 04:11:13.578: INFO: Got endpoints: latency-svc-z4wql [863.529677ms] Aug 28 04:11:13.645: INFO: Created: latency-svc-xljbd Aug 28 04:11:13.662: INFO: Got endpoints: latency-svc-xljbd [886.81675ms] Aug 28 04:11:13.722: INFO: Created: latency-svc-lz778 Aug 28 04:11:13.728: INFO: Got endpoints: latency-svc-lz778 [861.972383ms] Aug 28 04:11:13.759: INFO: Created: latency-svc-tlbgl Aug 28 04:11:13.770: INFO: Got endpoints: latency-svc-tlbgl [865.986435ms] Aug 28 04:11:13.874: INFO: Created: latency-svc-m6jkf Aug 28 04:11:13.879: INFO: Got endpoints: latency-svc-m6jkf [926.914299ms] Aug 28 04:11:13.902: INFO: Created: latency-svc-rtjvt Aug 28 04:11:13.921: INFO: Got endpoints: latency-svc-rtjvt [881.955381ms] Aug 28 04:11:13.951: INFO: Created: latency-svc-62h6k Aug 28 04:11:13.970: INFO: Got endpoints: latency-svc-62h6k [821.736573ms] Aug 28 04:11:14.059: INFO: Created: latency-svc-b6v4n Aug 28 04:11:14.090: INFO: Got endpoints: latency-svc-b6v4n [917.550552ms] Aug 28 04:11:14.131: INFO: Created: latency-svc-rjkws Aug 28 04:11:14.172: INFO: Got endpoints: latency-svc-rjkws [932.1896ms] Aug 28 04:11:14.202: INFO: Created: latency-svc-bp5h4 Aug 28 04:11:14.217: INFO: Got endpoints: latency-svc-bp5h4 [919.065436ms] Aug 28 04:11:14.306: INFO: Created: latency-svc-4cnnh Aug 28 04:11:14.312: INFO: Got endpoints: latency-svc-4cnnh [976.793384ms] Aug 28 04:11:14.346: INFO: Created: latency-svc-6hsrw Aug 28 04:11:14.363: INFO: Got endpoints: latency-svc-6hsrw [984.46978ms] Aug 28 04:11:14.401: INFO: Created: latency-svc-qlrvv Aug 28 04:11:14.453: INFO: Got endpoints: latency-svc-qlrvv [1.014746859s] Aug 28 04:11:14.489: INFO: Created: latency-svc-gq222 Aug 28 04:11:14.514: INFO: Got endpoints: latency-svc-gq222 [1.052338491s] Aug 28 04:11:14.539: INFO: Created: latency-svc-cfh5g Aug 28 04:11:14.593: INFO: Got endpoints: latency-svc-cfh5g [1.088031876s] Aug 28 04:11:14.628: INFO: Created: latency-svc-vb4rw Aug 28 
04:11:14.639: INFO: Got endpoints: latency-svc-vb4rw [1.061347775s] Aug 28 04:11:14.664: INFO: Created: latency-svc-fgcv6 Aug 28 04:11:14.722: INFO: Got endpoints: latency-svc-fgcv6 [1.060343316s] Aug 28 04:11:14.735: INFO: Created: latency-svc-pjdqw Aug 28 04:11:14.747: INFO: Got endpoints: latency-svc-pjdqw [1.019247941s] Aug 28 04:11:14.779: INFO: Created: latency-svc-5mcpp Aug 28 04:11:14.797: INFO: Got endpoints: latency-svc-5mcpp [1.026881468s] Aug 28 04:11:14.867: INFO: Created: latency-svc-c8zds Aug 28 04:11:14.869: INFO: Got endpoints: latency-svc-c8zds [989.674044ms] Aug 28 04:11:14.939: INFO: Created: latency-svc-5n4f4 Aug 28 04:11:15.004: INFO: Got endpoints: latency-svc-5n4f4 [1.082917154s] Aug 28 04:11:15.037: INFO: Created: latency-svc-q7c8q Aug 28 04:11:15.054: INFO: Got endpoints: latency-svc-q7c8q [1.084609323s] Aug 28 04:11:15.163: INFO: Created: latency-svc-wfpns Aug 28 04:11:15.175: INFO: Got endpoints: latency-svc-wfpns [1.085585801s] Aug 28 04:11:15.198: INFO: Created: latency-svc-fmcbc Aug 28 04:11:15.211: INFO: Got endpoints: latency-svc-fmcbc [1.03959553s] Aug 28 04:11:15.267: INFO: Created: latency-svc-flqm7 Aug 28 04:11:15.272: INFO: Got endpoints: latency-svc-flqm7 [1.055230615s] Aug 28 04:11:15.306: INFO: Created: latency-svc-prlbs Aug 28 04:11:15.322: INFO: Got endpoints: latency-svc-prlbs [1.009662695s] Aug 28 04:11:15.441: INFO: Created: latency-svc-xdl52 Aug 28 04:11:15.443: INFO: Got endpoints: latency-svc-xdl52 [1.079609101s] Aug 28 04:11:15.486: INFO: Created: latency-svc-5lz2t Aug 28 04:11:15.501: INFO: Got endpoints: latency-svc-5lz2t [1.048282659s] Aug 28 04:11:15.535: INFO: Created: latency-svc-n4mm2 Aug 28 04:11:15.585: INFO: Got endpoints: latency-svc-n4mm2 [1.070920835s] Aug 28 04:11:15.636: INFO: Created: latency-svc-ljmq4 Aug 28 04:11:15.654: INFO: Got endpoints: latency-svc-ljmq4 [1.060805291s] Aug 28 04:11:15.741: INFO: Created: latency-svc-q4l29 Aug 28 04:11:15.748: INFO: Got endpoints: latency-svc-q4l29 [1.108384281s] Aug 28 04:11:15.810: INFO: Created: latency-svc-m8dtq Aug 28 04:11:15.859: INFO: Got endpoints: latency-svc-m8dtq [1.136947899s] Aug 28 04:11:15.889: INFO: Created: latency-svc-rcm6m Aug 28 04:11:15.931: INFO: Got endpoints: latency-svc-rcm6m [1.183753321s] Aug 28 04:11:16.001: INFO: Created: latency-svc-kxwck Aug 28 04:11:16.019: INFO: Got endpoints: latency-svc-kxwck [1.222268598s] Aug 28 04:11:16.073: INFO: Created: latency-svc-nb5f6 Aug 28 04:11:16.092: INFO: Got endpoints: latency-svc-nb5f6 [1.222943313s] Aug 28 04:11:16.170: INFO: Created: latency-svc-5g4rj Aug 28 04:11:16.183: INFO: Got endpoints: latency-svc-5g4rj [1.179008275s] Aug 28 04:11:16.259: INFO: Created: latency-svc-zhcrf Aug 28 04:11:16.321: INFO: Got endpoints: latency-svc-zhcrf [1.266350299s] Aug 28 04:11:16.350: INFO: Created: latency-svc-5kjf4 Aug 28 04:11:16.381: INFO: Got endpoints: latency-svc-5kjf4 [1.206002706s] Aug 28 04:11:16.402: INFO: Created: latency-svc-r6rdl Aug 28 04:11:16.417: INFO: Got endpoints: latency-svc-r6rdl [1.205801626s] Aug 28 04:11:16.506: INFO: Created: latency-svc-x5sgc Aug 28 04:11:16.526: INFO: Got endpoints: latency-svc-x5sgc [1.253338231s] Aug 28 04:11:16.553: INFO: Created: latency-svc-w9fx9 Aug 28 04:11:16.567: INFO: Got endpoints: latency-svc-w9fx9 [1.24535307s] Aug 28 04:11:16.644: INFO: Created: latency-svc-qsfzd Aug 28 04:11:16.647: INFO: Got endpoints: latency-svc-qsfzd [1.203647194s] Aug 28 04:11:16.673: INFO: Created: latency-svc-bc9jx Aug 28 04:11:16.689: INFO: Got endpoints: latency-svc-bc9jx [1.187615707s] Aug 
28 04:11:16.812: INFO: Created: latency-svc-4v8g7 Aug 28 04:11:16.822: INFO: Got endpoints: latency-svc-4v8g7 [1.236547496s] Aug 28 04:11:16.860: INFO: Created: latency-svc-bmxw6 Aug 28 04:11:16.877: INFO: Got endpoints: latency-svc-bmxw6 [1.222438766s] Aug 28 04:11:16.955: INFO: Created: latency-svc-75hwt Aug 28 04:11:16.971: INFO: Got endpoints: latency-svc-75hwt [1.223181348s] Aug 28 04:11:16.998: INFO: Created: latency-svc-8l8f6 Aug 28 04:11:16.999: INFO: Got endpoints: latency-svc-8l8f6 [1.139928693s] Aug 28 04:11:17.064: INFO: Created: latency-svc-4b5pp Aug 28 04:11:17.079: INFO: Got endpoints: latency-svc-4b5pp [1.147778932s] Aug 28 04:11:17.155: INFO: Created: latency-svc-pszj9 Aug 28 04:11:17.162: INFO: Got endpoints: latency-svc-pszj9 [1.142397665s] Aug 28 04:11:17.206: INFO: Created: latency-svc-8kd9p Aug 28 04:11:17.210: INFO: Got endpoints: latency-svc-8kd9p [1.117712364s] Aug 28 04:11:17.261: INFO: Created: latency-svc-5dqf5 Aug 28 04:11:17.271: INFO: Got endpoints: latency-svc-5dqf5 [1.087659432s] Aug 28 04:11:17.297: INFO: Created: latency-svc-674xf Aug 28 04:11:17.345: INFO: Got endpoints: latency-svc-674xf [1.023929857s] Aug 28 04:11:17.357: INFO: Created: latency-svc-rgrzt Aug 28 04:11:17.375: INFO: Got endpoints: latency-svc-rgrzt [993.034167ms] Aug 28 04:11:17.423: INFO: Created: latency-svc-ctknm Aug 28 04:11:17.489: INFO: Got endpoints: latency-svc-ctknm [1.070971346s] Aug 28 04:11:17.508: INFO: Created: latency-svc-7nhrs Aug 28 04:11:17.525: INFO: Got endpoints: latency-svc-7nhrs [998.879901ms] Aug 28 04:11:17.549: INFO: Created: latency-svc-vx6nq Aug 28 04:11:17.567: INFO: Got endpoints: latency-svc-vx6nq [1.000031999s] Aug 28 04:11:17.585: INFO: Created: latency-svc-j6q2p Aug 28 04:11:17.640: INFO: Got endpoints: latency-svc-j6q2p [992.113706ms] Aug 28 04:11:17.705: INFO: Created: latency-svc-kkvdj Aug 28 04:11:17.729: INFO: Got endpoints: latency-svc-kkvdj [1.04031812s] Aug 28 04:11:17.799: INFO: Created: latency-svc-6tcwk Aug 28 04:11:17.856: INFO: Created: latency-svc-8pczt Aug 28 04:11:17.857: INFO: Got endpoints: latency-svc-6tcwk [1.034651205s] Aug 28 04:11:17.861: INFO: Got endpoints: latency-svc-8pczt [983.510167ms] Aug 28 04:11:17.934: INFO: Created: latency-svc-54wqc Aug 28 04:11:17.944: INFO: Got endpoints: latency-svc-54wqc [972.466145ms] Aug 28 04:11:17.981: INFO: Created: latency-svc-24hch Aug 28 04:11:18.001: INFO: Got endpoints: latency-svc-24hch [1.001071771s] Aug 28 04:11:18.076: INFO: Created: latency-svc-4g4lx Aug 28 04:11:18.106: INFO: Created: latency-svc-sqfkk Aug 28 04:11:18.106: INFO: Got endpoints: latency-svc-4g4lx [1.026691999s] Aug 28 04:11:18.122: INFO: Got endpoints: latency-svc-sqfkk [960.213285ms] Aug 28 04:11:18.150: INFO: Created: latency-svc-v9l5t Aug 28 04:11:18.159: INFO: Got endpoints: latency-svc-v9l5t [948.558857ms] Aug 28 04:11:18.214: INFO: Created: latency-svc-hvhlt Aug 28 04:11:18.217: INFO: Got endpoints: latency-svc-hvhlt [945.549805ms] Aug 28 04:11:18.264: INFO: Created: latency-svc-n4557 Aug 28 04:11:18.289: INFO: Got endpoints: latency-svc-n4557 [943.216829ms] Aug 28 04:11:18.398: INFO: Created: latency-svc-x4mfp Aug 28 04:11:18.398: INFO: Got endpoints: latency-svc-x4mfp [1.023538351s] Aug 28 04:11:18.442: INFO: Created: latency-svc-sq2rv Aug 28 04:11:18.467: INFO: Got endpoints: latency-svc-sq2rv [977.890007ms] Aug 28 04:11:18.537: INFO: Created: latency-svc-zj9f2 Aug 28 04:11:18.539: INFO: Got endpoints: latency-svc-zj9f2 [1.014363008s] Aug 28 04:11:18.570: INFO: Created: latency-svc-mh92r Aug 28 04:11:18.586: 
INFO: Got endpoints: latency-svc-mh92r [1.018594499s] Aug 28 04:11:18.605: INFO: Created: latency-svc-zk87g Aug 28 04:11:18.630: INFO: Got endpoints: latency-svc-zk87g [989.951772ms] Aug 28 04:11:18.762: INFO: Created: latency-svc-qkmcr Aug 28 04:11:18.774: INFO: Got endpoints: latency-svc-qkmcr [1.043978542s] Aug 28 04:11:18.830: INFO: Created: latency-svc-b9gz5 Aug 28 04:11:18.834: INFO: Got endpoints: latency-svc-b9gz5 [976.912239ms] Aug 28 04:11:18.863: INFO: Created: latency-svc-nkk6l Aug 28 04:11:18.883: INFO: Got endpoints: latency-svc-nkk6l [1.022022582s] Aug 28 04:11:18.922: INFO: Created: latency-svc-9j6b4 Aug 28 04:11:18.967: INFO: Got endpoints: latency-svc-9j6b4 [1.023273692s] Aug 28 04:11:18.983: INFO: Created: latency-svc-cvgxf Aug 28 04:11:18.997: INFO: Got endpoints: latency-svc-cvgxf [996.077355ms] Aug 28 04:11:19.025: INFO: Created: latency-svc-hxgbq Aug 28 04:11:19.039: INFO: Got endpoints: latency-svc-hxgbq [932.682298ms] Aug 28 04:11:19.141: INFO: Created: latency-svc-2ckz5 Aug 28 04:11:19.198: INFO: Got endpoints: latency-svc-2ckz5 [1.076115579s] Aug 28 04:11:19.199: INFO: Created: latency-svc-dklpn Aug 28 04:11:19.223: INFO: Got endpoints: latency-svc-dklpn [1.063860764s] Aug 28 04:11:19.287: INFO: Created: latency-svc-pwxxv Aug 28 04:11:19.293: INFO: Got endpoints: latency-svc-pwxxv [1.075814393s] Aug 28 04:11:19.319: INFO: Created: latency-svc-m465h Aug 28 04:11:19.335: INFO: Got endpoints: latency-svc-m465h [1.046067924s] Aug 28 04:11:19.360: INFO: Created: latency-svc-zfsc8 Aug 28 04:11:19.378: INFO: Got endpoints: latency-svc-zfsc8 [979.226615ms] Aug 28 04:11:19.441: INFO: Created: latency-svc-npfnw Aug 28 04:11:19.456: INFO: Got endpoints: latency-svc-npfnw [988.954087ms] Aug 28 04:11:19.504: INFO: Created: latency-svc-p47l7 Aug 28 04:11:19.517: INFO: Got endpoints: latency-svc-p47l7 [977.137891ms] Aug 28 04:11:19.584: INFO: Created: latency-svc-vdfkl Aug 28 04:11:19.592: INFO: Got endpoints: latency-svc-vdfkl [1.005316207s] Aug 28 04:11:19.637: INFO: Created: latency-svc-27vch Aug 28 04:11:19.655: INFO: Got endpoints: latency-svc-27vch [1.025448745s] Aug 28 04:11:19.678: INFO: Created: latency-svc-g8zhh Aug 28 04:11:19.753: INFO: Got endpoints: latency-svc-g8zhh [978.789344ms] Aug 28 04:11:19.755: INFO: Created: latency-svc-79vtm Aug 28 04:11:19.795: INFO: Got endpoints: latency-svc-79vtm [960.83216ms] Aug 28 04:11:19.817: INFO: Created: latency-svc-f4hpz Aug 28 04:11:19.836: INFO: Got endpoints: latency-svc-f4hpz [952.772373ms] Aug 28 04:11:19.932: INFO: Created: latency-svc-d7r4q Aug 28 04:11:19.938: INFO: Got endpoints: latency-svc-d7r4q [970.590209ms] Aug 28 04:11:19.972: INFO: Created: latency-svc-5hlh2 Aug 28 04:11:19.986: INFO: Got endpoints: latency-svc-5hlh2 [989.479687ms] Aug 28 04:11:20.082: INFO: Created: latency-svc-wdbgh Aug 28 04:11:20.089: INFO: Got endpoints: latency-svc-wdbgh [1.050166648s] Aug 28 04:11:20.111: INFO: Created: latency-svc-gdhpx Aug 28 04:11:20.126: INFO: Got endpoints: latency-svc-gdhpx [927.220816ms] Aug 28 04:11:20.163: INFO: Created: latency-svc-rl6tv Aug 28 04:11:20.225: INFO: Got endpoints: latency-svc-rl6tv [1.001426988s] Aug 28 04:11:20.256: INFO: Created: latency-svc-gfm8s Aug 28 04:11:20.283: INFO: Got endpoints: latency-svc-gfm8s [989.515155ms] Aug 28 04:11:20.315: INFO: Created: latency-svc-vvjd6 Aug 28 04:11:20.363: INFO: Got endpoints: latency-svc-vvjd6 [1.027703047s] Aug 28 04:11:20.397: INFO: Created: latency-svc-2fmwd Aug 28 04:11:20.415: INFO: Got endpoints: latency-svc-2fmwd [1.037027182s] Aug 28 
04:11:20.441: INFO: Created: latency-svc-dshz6 Aug 28 04:11:20.507: INFO: Got endpoints: latency-svc-dshz6 [1.051112589s] Aug 28 04:11:20.534: INFO: Created: latency-svc-pg4gc Aug 28 04:11:20.548: INFO: Got endpoints: latency-svc-pg4gc [1.030950189s] Aug 28 04:11:20.578: INFO: Created: latency-svc-mnmwz Aug 28 04:11:20.596: INFO: Got endpoints: latency-svc-mnmwz [1.004162493s] Aug 28 04:11:20.644: INFO: Created: latency-svc-s79l8 Aug 28 04:11:20.647: INFO: Got endpoints: latency-svc-s79l8 [991.693935ms] Aug 28 04:11:20.740: INFO: Created: latency-svc-xvn27 Aug 28 04:11:20.788: INFO: Got endpoints: latency-svc-xvn27 [1.035299738s] Aug 28 04:11:20.812: INFO: Created: latency-svc-pzpwm Aug 28 04:11:20.831: INFO: Got endpoints: latency-svc-pzpwm [1.036518156s] Aug 28 04:11:20.860: INFO: Created: latency-svc-w4gz2 Aug 28 04:11:20.880: INFO: Got endpoints: latency-svc-w4gz2 [1.043948342s] Aug 28 04:11:20.932: INFO: Created: latency-svc-nmc2x Aug 28 04:11:20.968: INFO: Created: latency-svc-rjpn7 Aug 28 04:11:20.969: INFO: Got endpoints: latency-svc-nmc2x [1.030075744s] Aug 28 04:11:20.997: INFO: Got endpoints: latency-svc-rjpn7 [1.010431503s] Aug 28 04:11:21.076: INFO: Created: latency-svc-8c9cb Aug 28 04:11:21.082: INFO: Got endpoints: latency-svc-8c9cb [992.434603ms] Aug 28 04:11:21.118: INFO: Created: latency-svc-r5z7f Aug 28 04:11:21.135: INFO: Got endpoints: latency-svc-r5z7f [1.009358395s] Aug 28 04:11:21.226: INFO: Created: latency-svc-d8cnn Aug 28 04:11:21.227: INFO: Got endpoints: latency-svc-d8cnn [1.002367641s] Aug 28 04:11:21.285: INFO: Created: latency-svc-f5bnt Aug 28 04:11:21.305: INFO: Got endpoints: latency-svc-f5bnt [1.021711497s] Aug 28 04:11:21.380: INFO: Created: latency-svc-f9f2f Aug 28 04:11:21.388: INFO: Got endpoints: latency-svc-f9f2f [1.025167194s] Aug 28 04:11:21.413: INFO: Created: latency-svc-zmprv Aug 28 04:11:21.425: INFO: Got endpoints: latency-svc-zmprv [1.010040987s] Aug 28 04:11:21.449: INFO: Created: latency-svc-p658c Aug 28 04:11:21.461: INFO: Got endpoints: latency-svc-p658c [953.421408ms] Aug 28 04:11:21.516: INFO: Created: latency-svc-9rb6k Aug 28 04:11:21.518: INFO: Got endpoints: latency-svc-9rb6k [970.27027ms] Aug 28 04:11:21.543: INFO: Created: latency-svc-bwlrl Aug 28 04:11:21.565: INFO: Got endpoints: latency-svc-bwlrl [968.588001ms] Aug 28 04:11:21.585: INFO: Created: latency-svc-4dpf8 Aug 28 04:11:21.601: INFO: Got endpoints: latency-svc-4dpf8 [953.870784ms] Aug 28 04:11:21.662: INFO: Created: latency-svc-85zwt Aug 28 04:11:21.677: INFO: Got endpoints: latency-svc-85zwt [888.193289ms] Aug 28 04:11:21.711: INFO: Created: latency-svc-94xpp Aug 28 04:11:21.728: INFO: Got endpoints: latency-svc-94xpp [896.243732ms] Aug 28 04:11:21.789: INFO: Created: latency-svc-2k9hv Aug 28 04:11:21.847: INFO: Got endpoints: latency-svc-2k9hv [967.354907ms] Aug 28 04:11:21.944: INFO: Created: latency-svc-s2h9h Aug 28 04:11:21.980: INFO: Got endpoints: latency-svc-s2h9h [1.011052164s] Aug 28 04:11:22.037: INFO: Created: latency-svc-clpj9 Aug 28 04:11:22.093: INFO: Got endpoints: latency-svc-clpj9 [1.096097882s] Aug 28 04:11:22.132: INFO: Created: latency-svc-p6xh2 Aug 28 04:11:22.143: INFO: Got endpoints: latency-svc-p6xh2 [1.060945491s] Aug 28 04:11:22.167: INFO: Created: latency-svc-nbrb6 Aug 28 04:11:22.207: INFO: Got endpoints: latency-svc-nbrb6 [1.071791487s] Aug 28 04:11:22.239: INFO: Created: latency-svc-tvsmb Aug 28 04:11:22.257: INFO: Got endpoints: latency-svc-tvsmb [1.03001134s] Aug 28 04:11:22.350: INFO: Created: latency-svc-jvv5s Aug 28 04:11:22.355: INFO: 
Got endpoints: latency-svc-jvv5s [1.049862038s] Aug 28 04:11:22.389: INFO: Created: latency-svc-6brcr Aug 28 04:11:22.410: INFO: Got endpoints: latency-svc-6brcr [1.02158872s] Aug 28 04:11:22.506: INFO: Created: latency-svc-5sdtp Aug 28 04:11:22.516: INFO: Got endpoints: latency-svc-5sdtp [1.090963634s] Aug 28 04:11:22.558: INFO: Created: latency-svc-9cqcd Aug 28 04:11:22.580: INFO: Got endpoints: latency-svc-9cqcd [1.118826739s] Aug 28 04:11:22.675: INFO: Created: latency-svc-6kxsn Aug 28 04:11:22.679: INFO: Got endpoints: latency-svc-6kxsn [1.160187103s] Aug 28 04:11:22.720: INFO: Created: latency-svc-9rtpt Aug 28 04:11:22.740: INFO: Got endpoints: latency-svc-9rtpt [1.175048035s] Aug 28 04:11:22.842: INFO: Created: latency-svc-zlv8p Aug 28 04:11:22.845: INFO: Got endpoints: latency-svc-zlv8p [1.243740447s] Aug 28 04:11:22.905: INFO: Created: latency-svc-vqd74 Aug 28 04:11:22.939: INFO: Got endpoints: latency-svc-vqd74 [1.261960465s] Aug 28 04:11:22.989: INFO: Created: latency-svc-5hmh9 Aug 28 04:11:23.004: INFO: Got endpoints: latency-svc-5hmh9 [1.276240609s] Aug 28 04:11:23.044: INFO: Created: latency-svc-gkmkf Aug 28 04:11:23.112: INFO: Got endpoints: latency-svc-gkmkf [1.264953409s] Aug 28 04:11:23.128: INFO: Created: latency-svc-f4r47 Aug 28 04:11:23.168: INFO: Got endpoints: latency-svc-f4r47 [1.188274802s] Aug 28 04:11:23.192: INFO: Created: latency-svc-lr6h9 Aug 28 04:11:23.244: INFO: Got endpoints: latency-svc-lr6h9 [1.150155075s] Aug 28 04:11:23.278: INFO: Created: latency-svc-mdkjd Aug 28 04:11:23.302: INFO: Got endpoints: latency-svc-mdkjd [1.158741341s] Aug 28 04:11:23.325: INFO: Created: latency-svc-l68s8 Aug 28 04:11:23.412: INFO: Got endpoints: latency-svc-l68s8 [1.204161348s] Aug 28 04:11:23.413: INFO: Created: latency-svc-sw7bv Aug 28 04:11:23.427: INFO: Got endpoints: latency-svc-sw7bv [1.169167294s] Aug 28 04:11:23.451: INFO: Created: latency-svc-zcmbs Aug 28 04:11:23.488: INFO: Got endpoints: latency-svc-zcmbs [1.132758578s] Aug 28 04:11:23.542: INFO: Created: latency-svc-pjql4 Aug 28 04:11:23.566: INFO: Got endpoints: latency-svc-pjql4 [1.155953966s] Aug 28 04:11:23.567: INFO: Created: latency-svc-9t777 Aug 28 04:11:23.578: INFO: Got endpoints: latency-svc-9t777 [1.061691795s] Aug 28 04:11:23.612: INFO: Created: latency-svc-7bx8x Aug 28 04:11:23.634: INFO: Got endpoints: latency-svc-7bx8x [1.053350587s] Aug 28 04:11:23.699: INFO: Created: latency-svc-q4729 Aug 28 04:11:23.777: INFO: Got endpoints: latency-svc-q4729 [1.097973274s] Aug 28 04:11:23.830: INFO: Created: latency-svc-wvvbs Aug 28 04:11:23.838: INFO: Got endpoints: latency-svc-wvvbs [1.097750633s] Aug 28 04:11:23.925: INFO: Created: latency-svc-9pttt Aug 28 04:11:23.971: INFO: Got endpoints: latency-svc-9pttt [1.125085166s] Aug 28 04:11:24.028: INFO: Created: latency-svc-znddk Aug 28 04:11:24.050: INFO: Got endpoints: latency-svc-znddk [1.110647262s] Aug 28 04:11:24.124: INFO: Created: latency-svc-vd9mt Aug 28 04:11:24.140: INFO: Got endpoints: latency-svc-vd9mt [1.136004021s] Aug 28 04:11:24.171: INFO: Created: latency-svc-t7tct Aug 28 04:11:24.188: INFO: Got endpoints: latency-svc-t7tct [1.075119492s] Aug 28 04:11:24.273: INFO: Created: latency-svc-nfzhn Aug 28 04:11:24.304: INFO: Got endpoints: latency-svc-nfzhn [1.135347614s] Aug 28 04:11:24.432: INFO: Created: latency-svc-cmj9v Aug 28 04:11:24.444: INFO: Got endpoints: latency-svc-cmj9v [1.200285317s] Aug 28 04:11:24.464: INFO: Created: latency-svc-vt4vs Aug 28 04:11:24.483: INFO: Got endpoints: latency-svc-vt4vs [1.180786779s] Aug 28 04:11:24.508: 
INFO: Created: latency-svc-ndbtv Aug 28 04:11:24.526: INFO: Got endpoints: latency-svc-ndbtv [1.113937774s] Aug 28 04:11:24.579: INFO: Created: latency-svc-pgr5r Aug 28 04:11:24.583: INFO: Got endpoints: latency-svc-pgr5r [1.155490043s] Aug 28 04:11:24.639: INFO: Created: latency-svc-k5dnl Aug 28 04:11:24.676: INFO: Got endpoints: latency-svc-k5dnl [1.187992964s] Aug 28 04:11:24.758: INFO: Created: latency-svc-nzgpm Aug 28 04:11:24.762: INFO: Got endpoints: latency-svc-nzgpm [1.195257632s] Aug 28 04:11:24.801: INFO: Created: latency-svc-qppxh Aug 28 04:11:24.821: INFO: Got endpoints: latency-svc-qppxh [1.24276066s] Aug 28 04:11:24.854: INFO: Created: latency-svc-l9ltf Aug 28 04:11:24.902: INFO: Got endpoints: latency-svc-l9ltf [1.268670483s] Aug 28 04:11:24.938: INFO: Created: latency-svc-lxl48 Aug 28 04:11:24.982: INFO: Got endpoints: latency-svc-lxl48 [1.204683092s] Aug 28 04:11:25.041: INFO: Created: latency-svc-wpfh7 Aug 28 04:11:25.056: INFO: Got endpoints: latency-svc-wpfh7 [1.218010708s] Aug 28 04:11:25.094: INFO: Created: latency-svc-jn9bb Aug 28 04:11:25.125: INFO: Got endpoints: latency-svc-jn9bb [1.154065995s] Aug 28 04:11:25.196: INFO: Created: latency-svc-hrk8b Aug 28 04:11:25.213: INFO: Got endpoints: latency-svc-hrk8b [1.162598979s] Aug 28 04:11:25.245: INFO: Created: latency-svc-ccx2w Aug 28 04:11:25.261: INFO: Got endpoints: latency-svc-ccx2w [1.120206491s] Aug 28 04:11:25.321: INFO: Created: latency-svc-mgnx2 Aug 28 04:11:25.323: INFO: Got endpoints: latency-svc-mgnx2 [1.135088521s] Aug 28 04:11:25.352: INFO: Created: latency-svc-54bvc Aug 28 04:11:25.389: INFO: Got endpoints: latency-svc-54bvc [1.085005625s] Aug 28 04:11:25.476: INFO: Created: latency-svc-nfg7z Aug 28 04:11:25.504: INFO: Got endpoints: latency-svc-nfg7z [1.056853098s] Aug 28 04:11:25.506: INFO: Created: latency-svc-mtsh5 Aug 28 04:11:25.515: INFO: Got endpoints: latency-svc-mtsh5 [1.032179237s] Aug 28 04:11:25.539: INFO: Created: latency-svc-vpnx9 Aug 28 04:11:25.551: INFO: Got endpoints: latency-svc-vpnx9 [1.025017013s] Aug 28 04:11:25.620: INFO: Created: latency-svc-5ql5s Aug 28 04:11:25.640: INFO: Got endpoints: latency-svc-5ql5s [1.057594482s] Aug 28 04:11:25.678: INFO: Created: latency-svc-56mdp Aug 28 04:11:25.696: INFO: Got endpoints: latency-svc-56mdp [1.019708868s] Aug 28 04:11:25.720: INFO: Created: latency-svc-mv8l6 Aug 28 04:11:25.764: INFO: Got endpoints: latency-svc-mv8l6 [1.001928083s] Aug 28 04:11:25.796: INFO: Created: latency-svc-ftz8b Aug 28 04:11:25.816: INFO: Got endpoints: latency-svc-ftz8b [994.688651ms] Aug 28 04:11:25.838: INFO: Created: latency-svc-czn74 Aug 28 04:11:25.853: INFO: Got endpoints: latency-svc-czn74 [950.362261ms] Aug 28 04:11:25.926: INFO: Created: latency-svc-ljv77 Aug 28 04:11:25.933: INFO: Got endpoints: latency-svc-ljv77 [950.833163ms] Aug 28 04:11:25.959: INFO: Created: latency-svc-24cl7 Aug 28 04:11:25.979: INFO: Got endpoints: latency-svc-24cl7 [922.17961ms] Aug 28 04:11:26.025: INFO: Created: latency-svc-jhqkx Aug 28 04:11:26.077: INFO: Got endpoints: latency-svc-jhqkx [951.575751ms] Aug 28 04:11:26.109: INFO: Created: latency-svc-d26ct Aug 28 04:11:26.124: INFO: Got endpoints: latency-svc-d26ct [910.81698ms] Aug 28 04:11:26.226: INFO: Created: latency-svc-k69ss Aug 28 04:11:26.232: INFO: Got endpoints: latency-svc-k69ss [971.046587ms] Aug 28 04:11:26.266: INFO: Created: latency-svc-tth6n Aug 28 04:11:26.282: INFO: Got endpoints: latency-svc-tth6n [958.836109ms] Aug 28 04:11:26.313: INFO: Created: latency-svc-r7bks Aug 28 04:11:26.387: INFO: Got endpoints: 
latency-svc-r7bks [997.536591ms] Aug 28 04:11:26.422: INFO: Created: latency-svc-dqdlr Aug 28 04:11:26.432: INFO: Got endpoints: latency-svc-dqdlr [928.156001ms] Aug 28 04:11:26.463: INFO: Created: latency-svc-4kcrf Aug 28 04:11:26.481: INFO: Got endpoints: latency-svc-4kcrf [965.301152ms] Aug 28 04:11:26.536: INFO: Created: latency-svc-qnkjv Aug 28 04:11:26.552: INFO: Got endpoints: latency-svc-qnkjv [1.000398783s] Aug 28 04:11:26.556: INFO: Latencies: [114.721607ms 205.332656ms 265.71911ms 356.902259ms 394.595819ms 442.726968ms 529.06458ms 637.773253ms 661.774959ms 729.315549ms 786.091858ms 821.736573ms 823.484204ms 861.972383ms 863.529677ms 865.986435ms 868.919802ms 880.319104ms 881.955381ms 886.81675ms 888.193289ms 896.243732ms 910.81698ms 917.550552ms 919.065436ms 922.17961ms 926.914299ms 927.220816ms 927.668139ms 928.156001ms 932.1896ms 932.682298ms 943.216829ms 945.549805ms 948.558857ms 948.955672ms 950.362261ms 950.833163ms 951.575751ms 952.772373ms 953.421408ms 953.870784ms 958.836109ms 960.213285ms 960.83216ms 965.301152ms 967.354907ms 968.588001ms 970.27027ms 970.590209ms 971.046587ms 972.466145ms 976.793384ms 976.912239ms 977.137891ms 977.890007ms 978.789344ms 979.226615ms 983.510167ms 984.46978ms 988.954087ms 989.479687ms 989.515155ms 989.674044ms 989.951772ms 991.693935ms 992.113706ms 992.434603ms 993.034167ms 994.688651ms 996.077355ms 997.536591ms 998.879901ms 1.000031999s 1.000398783s 1.001071771s 1.001426988s 1.001928083s 1.002367641s 1.004162493s 1.005316207s 1.009358395s 1.009662695s 1.010040987s 1.010431503s 1.011052164s 1.014363008s 1.014746859s 1.018594499s 1.019247941s 1.019708868s 1.02158872s 1.021711497s 1.022022582s 1.023273692s 1.023538351s 1.023929857s 1.025017013s 1.025167194s 1.025448745s 1.026691999s 1.026881468s 1.027703047s 1.03001134s 1.030075744s 1.030950189s 1.032179237s 1.034651205s 1.035299738s 1.036518156s 1.037027182s 1.03959553s 1.04031812s 1.043948342s 1.043978542s 1.046067924s 1.048282659s 1.049862038s 1.050166648s 1.051112589s 1.052338491s 1.053350587s 1.055230615s 1.056853098s 1.057594482s 1.060343316s 1.060805291s 1.060945491s 1.061347775s 1.061691795s 1.063860764s 1.070920835s 1.070971346s 1.071791487s 1.075119492s 1.075814393s 1.076115579s 1.079609101s 1.082917154s 1.084609323s 1.085005625s 1.085585801s 1.087659432s 1.088031876s 1.090963634s 1.096097882s 1.097750633s 1.097973274s 1.108384281s 1.110647262s 1.113937774s 1.117712364s 1.118826739s 1.120206491s 1.125085166s 1.132758578s 1.135088521s 1.135347614s 1.136004021s 1.136947899s 1.139928693s 1.142397665s 1.147778932s 1.150155075s 1.154065995s 1.155490043s 1.155953966s 1.158741341s 1.160187103s 1.162598979s 1.169167294s 1.175048035s 1.179008275s 1.180786779s 1.183753321s 1.187615707s 1.187992964s 1.188274802s 1.195257632s 1.200285317s 1.203647194s 1.204161348s 1.204683092s 1.205801626s 1.206002706s 1.218010708s 1.222268598s 1.222438766s 1.222943313s 1.223181348s 1.236547496s 1.24276066s 1.243740447s 1.24535307s 1.253338231s 1.261960465s 1.264953409s 1.266350299s 1.268670483s 1.276240609s] Aug 28 04:11:26.557: INFO: 50 %ile: 1.026691999s Aug 28 04:11:26.557: INFO: 90 %ile: 1.203647194s Aug 28 04:11:26.557: INFO: 99 %ile: 1.268670483s Aug 28 04:11:26.557: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:11:26.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"svc-latency-2602" for this suite. • [SLOW TEST:18.720 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":75,"skipped":1246,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:11:26.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Aug 28 04:11:26.684: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix149835141/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:11:27.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5740" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":76,"skipped":1248,"failed":0} S ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:11:27.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5093.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5093.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5093.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5093.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5093.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5093.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 28 04:11:36.891: INFO: DNS probes using dns-5093/dns-test-3eb86d7a-14b1-4171-a61d-f24bfb806c09 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:11:37.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5093" for this suite. 
• [SLOW TEST:9.752 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":77,"skipped":1249,"failed":0} S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:11:37.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Aug 28 04:11:38.016: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:11:51.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7781" for this suite. 
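The pod-lifecycle spec above hinges on a watch observing both creation and graceful deletion. A minimal by-hand equivalent (the pod name and image are assumptions, not values from the test):

# Watch pod events in the namespace while a pod is created and then
# deleted with a grace period; the watch shows it appear, terminate, and vanish.
kubectl get pods -n pods-7781 --watch &
kubectl run pod-watch-demo --image=nginx --restart=Never -n pods-7781
kubectl delete pod pod-watch-demo -n pods-7781 --grace-period=30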
• [SLOW TEST:14.232 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1250,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:11:51.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0828 04:12:21.936540 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 28 04:12:21.936: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:12:21.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1766" for this suite. 
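The orphaning behaviour verified above can be reproduced from the command line; in 1.17-era clients, --cascade=false maps to deleteOptions.PropagationPolicy=Orphan. The deployment name below is an assumption, since the test does not log it:

# Delete the deployment without cascading, then confirm the ReplicaSet
# it created is still present (now ownerless, i.e. orphaned).
kubectl delete deployment demo-deployment --cascade=false -n gc-1766
kubectl get replicasets -n gc-1766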
• [SLOW TEST:30.184 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":79,"skipped":1309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:12:21.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a working application [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Aug 28 04:12:22.083: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Aug 28 04:12:22.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9442' Aug 28 04:12:23.716: INFO: stderr: "" Aug 28 04:12:23.716: INFO: stdout: "service/agnhost-slave created\n" Aug 28 04:12:23.717: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Aug 28 04:12:23.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9442' Aug 28 04:12:25.445: INFO: stderr: "" Aug 28 04:12:25.445: INFO: stdout: "service/agnhost-master created\n" Aug 28 04:12:25.446: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Aug 28 04:12:25.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9442' Aug 28 04:12:27.224: INFO: stderr: "" Aug 28 04:12:27.224: INFO: stdout: "service/frontend created\n" Aug 28 04:12:27.226: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Aug 28 04:12:27.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9442' Aug 28 04:12:28.950: INFO: stderr: "" Aug 28 04:12:28.950: INFO: stdout: "deployment.apps/frontend created\n" Aug 28 04:12:28.951: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Aug 28 04:12:28.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9442' Aug 28 04:12:30.551: INFO: stderr: "" Aug 28 04:12:30.552: INFO: stdout: "deployment.apps/agnhost-master created\n" Aug 28 04:12:30.554: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Aug 28 04:12:30.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9442' Aug 28 04:12:32.937: INFO: stderr: "" Aug 28 04:12:32.937: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Aug 28 04:12:32.938: INFO: Waiting for all frontend pods to be Running. Aug 28 04:12:37.990: INFO: Waiting for frontend to serve content. Aug 28 04:12:39.184: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: Aug 28 04:12:44.196: INFO: Trying to add a new entry to the guestbook. Aug 28 04:12:44.207: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Aug 28 04:12:44.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9442' Aug 28 04:12:45.502: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Aug 28 04:12:45.502: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Aug 28 04:12:45.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9442' Aug 28 04:12:46.857: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 28 04:12:46.857: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Aug 28 04:12:46.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9442' Aug 28 04:12:48.165: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 28 04:12:48.165: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 28 04:12:48.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9442' Aug 28 04:12:49.470: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 28 04:12:49.471: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 28 04:12:49.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9442' Aug 28 04:12:50.693: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 28 04:12:50.693: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Aug 28 04:12:50.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9442' Aug 28 04:12:51.953: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 28 04:12:51.953: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:12:51.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9442" for this suite. 
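The repeated warning above is the expected consequence of the cleanup flags: --grace-period=0 --force deletes the API object immediately, without waiting for the kubelet to confirm that the containers have stopped. A stand-alone equivalent of one cleanup call, targeting a named resource instead of reading a manifest from stdin:

kubectl delete deployment frontend -n kubectl-9442 --grace-period=0 --force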
• [SLOW TEST:30.535 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381 should create and stop a working application [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":80,"skipped":1348,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:12:52.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 28 04:12:52.910: INFO: Waiting up to 5m0s for pod "pod-2cd31175-3fb4-4d9b-89ca-2623269f9778" in namespace "emptydir-7686" to be "success or failure" Aug 28 04:12:53.191: INFO: Pod "pod-2cd31175-3fb4-4d9b-89ca-2623269f9778": Phase="Pending", Reason="", readiness=false. Elapsed: 280.320123ms Aug 28 04:12:55.230: INFO: Pod "pod-2cd31175-3fb4-4d9b-89ca-2623269f9778": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319561209s Aug 28 04:12:57.375: INFO: Pod "pod-2cd31175-3fb4-4d9b-89ca-2623269f9778": Phase="Running", Reason="", readiness=true. Elapsed: 4.465020238s Aug 28 04:12:59.380: INFO: Pod "pod-2cd31175-3fb4-4d9b-89ca-2623269f9778": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.469886203s STEP: Saw pod success Aug 28 04:12:59.381: INFO: Pod "pod-2cd31175-3fb4-4d9b-89ca-2623269f9778" satisfied condition "success or failure" Aug 28 04:12:59.385: INFO: Trying to get logs from node jerma-worker pod pod-2cd31175-3fb4-4d9b-89ca-2623269f9778 container test-container: STEP: delete the pod Aug 28 04:12:59.410: INFO: Waiting for pod pod-2cd31175-3fb4-4d9b-89ca-2623269f9778 to disappear Aug 28 04:12:59.451: INFO: Pod pod-2cd31175-3fb4-4d9b-89ca-2623269f9778 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:12:59.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7686" for this suite. 
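The spec above uses the e2e test image to create a file with mode 0777 on a tmpfs-backed emptyDir and verify the result. A rough stand-alone pod showing the tmpfs side (image, names, and the check command are assumptions; medium: Memory is what makes the emptyDir tmpfs-backed):

kubectl apply -n emptydir-7686 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Print the mount's permission bits and confirm it is tmpfs-backed.
    command: ["sh", "-c", "stat -c '%a' /mnt/scratch && grep /mnt/scratch /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
EOF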
• [SLOW TEST:7.056 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1353,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:12:59.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6844 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6844 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6844 Aug 28 04:12:59.745: INFO: Found 0 stateful pods, waiting for 1 Aug 28 04:13:09.750: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 28 04:13:09.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 28 04:13:11.283: INFO: stderr: "I0828 04:13:11.095061 1647 log.go:172] (0x4000728e70) (0x40007201e0) Create stream\nI0828 04:13:11.098143 1647 log.go:172] (0x4000728e70) (0x40007201e0) Stream added, broadcasting: 1\nI0828 04:13:11.119701 1647 log.go:172] (0x4000728e70) Reply frame received for 1\nI0828 04:13:11.120514 1647 log.go:172] (0x4000728e70) (0x4000720280) Create stream\nI0828 04:13:11.120596 1647 log.go:172] (0x4000728e70) (0x4000720280) Stream added, broadcasting: 3\nI0828 04:13:11.122940 1647 log.go:172] (0x4000728e70) Reply frame received for 3\nI0828 04:13:11.123265 1647 log.go:172] (0x4000728e70) (0x40007ee000) Create stream\nI0828 04:13:11.123337 1647 log.go:172] (0x4000728e70) (0x40007ee000) Stream 
added, broadcasting: 5\nI0828 04:13:11.129556 1647 log.go:172] (0x4000728e70) Reply frame received for 5\nI0828 04:13:11.200523 1647 log.go:172] (0x4000728e70) Data frame received for 5\nI0828 04:13:11.200710 1647 log.go:172] (0x40007ee000) (5) Data frame handling\nI0828 04:13:11.201073 1647 log.go:172] (0x40007ee000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 04:13:11.260886 1647 log.go:172] (0x4000728e70) Data frame received for 3\nI0828 04:13:11.261151 1647 log.go:172] (0x4000720280) (3) Data frame handling\nI0828 04:13:11.261362 1647 log.go:172] (0x4000728e70) Data frame received for 5\nI0828 04:13:11.261536 1647 log.go:172] (0x40007ee000) (5) Data frame handling\nI0828 04:13:11.261667 1647 log.go:172] (0x4000720280) (3) Data frame sent\nI0828 04:13:11.261834 1647 log.go:172] (0x4000728e70) Data frame received for 3\nI0828 04:13:11.261979 1647 log.go:172] (0x4000720280) (3) Data frame handling\nI0828 04:13:11.262227 1647 log.go:172] (0x4000728e70) Data frame received for 1\nI0828 04:13:11.262383 1647 log.go:172] (0x40007201e0) (1) Data frame handling\nI0828 04:13:11.262572 1647 log.go:172] (0x40007201e0) (1) Data frame sent\nI0828 04:13:11.264029 1647 log.go:172] (0x4000728e70) (0x40007201e0) Stream removed, broadcasting: 1\nI0828 04:13:11.266516 1647 log.go:172] (0x4000728e70) Go away received\nI0828 04:13:11.270595 1647 log.go:172] (0x4000728e70) (0x40007201e0) Stream removed, broadcasting: 1\nI0828 04:13:11.271733 1647 log.go:172] (0x4000728e70) (0x4000720280) Stream removed, broadcasting: 3\nI0828 04:13:11.272364 1647 log.go:172] (0x4000728e70) (0x40007ee000) Stream removed, broadcasting: 5\n" Aug 28 04:13:11.284: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 28 04:13:11.285: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 28 04:13:11.297: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 28 04:13:21.346: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 28 04:13:21.346: INFO: Waiting for statefulset status.replicas updated to 0 Aug 28 04:13:21.369: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999929269s Aug 28 04:13:22.389: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992395852s Aug 28 04:13:23.586: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.972573662s Aug 28 04:13:24.602: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.775199069s Aug 28 04:13:25.611: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.759176409s Aug 28 04:13:26.618: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.750897265s Aug 28 04:13:27.626: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.743294442s Aug 28 04:13:28.633: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.735782781s Aug 28 04:13:29.641: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.728419953s Aug 28 04:13:30.650: INFO: Verifying statefulset ss doesn't scale past 1 for another 720.360768ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6844 Aug 28 04:13:31.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 
04:13:33.114: INFO: stderr: "I0828 04:13:33.008583 1670 log.go:172] (0x40003d8000) (0x4000809a40) Create stream\nI0828 04:13:33.011450 1670 log.go:172] (0x40003d8000) (0x4000809a40) Stream added, broadcasting: 1\nI0828 04:13:33.020899 1670 log.go:172] (0x40003d8000) Reply frame received for 1\nI0828 04:13:33.021723 1670 log.go:172] (0x40003d8000) (0x4000764000) Create stream\nI0828 04:13:33.021798 1670 log.go:172] (0x40003d8000) (0x4000764000) Stream added, broadcasting: 3\nI0828 04:13:33.023696 1670 log.go:172] (0x40003d8000) Reply frame received for 3\nI0828 04:13:33.024139 1670 log.go:172] (0x40003d8000) (0x40007640a0) Create stream\nI0828 04:13:33.024232 1670 log.go:172] (0x40003d8000) (0x40007640a0) Stream added, broadcasting: 5\nI0828 04:13:33.025731 1670 log.go:172] (0x40003d8000) Reply frame received for 5\nI0828 04:13:33.093304 1670 log.go:172] (0x40003d8000) Data frame received for 3\nI0828 04:13:33.093642 1670 log.go:172] (0x40003d8000) Data frame received for 5\nI0828 04:13:33.093871 1670 log.go:172] (0x4000764000) (3) Data frame handling\nI0828 04:13:33.093994 1670 log.go:172] (0x40007640a0) (5) Data frame handling\nI0828 04:13:33.094112 1670 log.go:172] (0x40003d8000) Data frame received for 1\nI0828 04:13:33.094258 1670 log.go:172] (0x4000809a40) (1) Data frame handling\nI0828 04:13:33.095117 1670 log.go:172] (0x40007640a0) (5) Data frame sent\nI0828 04:13:33.095303 1670 log.go:172] (0x4000809a40) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0828 04:13:33.095618 1670 log.go:172] (0x40003d8000) Data frame received for 5\nI0828 04:13:33.095728 1670 log.go:172] (0x40007640a0) (5) Data frame handling\nI0828 04:13:33.095862 1670 log.go:172] (0x4000764000) (3) Data frame sent\nI0828 04:13:33.095969 1670 log.go:172] (0x40003d8000) Data frame received for 3\nI0828 04:13:33.096033 1670 log.go:172] (0x4000764000) (3) Data frame handling\nI0828 04:13:33.099198 1670 log.go:172] (0x40003d8000) (0x4000809a40) Stream removed, broadcasting: 1\nI0828 04:13:33.100149 1670 log.go:172] (0x40003d8000) Go away received\nI0828 04:13:33.103346 1670 log.go:172] (0x40003d8000) (0x4000809a40) Stream removed, broadcasting: 1\nI0828 04:13:33.103997 1670 log.go:172] (0x40003d8000) (0x4000764000) Stream removed, broadcasting: 3\nI0828 04:13:33.104256 1670 log.go:172] (0x40003d8000) (0x40007640a0) Stream removed, broadcasting: 5\n" Aug 28 04:13:33.115: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 28 04:13:33.115: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 28 04:13:33.122: INFO: Found 1 stateful pods, waiting for 3 Aug 28 04:13:43.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 28 04:13:43.131: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 28 04:13:43.131: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 28 04:13:43.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 28 04:13:44.637: INFO: stderr: "I0828 04:13:44.493884 1694 log.go:172] (0x4000a76160) (0x40007fbb80) Create stream\nI0828 04:13:44.497495 1694 log.go:172] (0x4000a76160) (0x40007fbb80) 
Stream added, broadcasting: 1\nI0828 04:13:44.510808 1694 log.go:172] (0x4000a76160) Reply frame received for 1\nI0828 04:13:44.512157 1694 log.go:172] (0x4000a76160) (0x4000818000) Create stream\nI0828 04:13:44.512286 1694 log.go:172] (0x4000a76160) (0x4000818000) Stream added, broadcasting: 3\nI0828 04:13:44.514409 1694 log.go:172] (0x4000a76160) Reply frame received for 3\nI0828 04:13:44.514925 1694 log.go:172] (0x4000a76160) (0x40008180a0) Create stream\nI0828 04:13:44.515028 1694 log.go:172] (0x4000a76160) (0x40008180a0) Stream added, broadcasting: 5\nI0828 04:13:44.516579 1694 log.go:172] (0x4000a76160) Reply frame received for 5\nI0828 04:13:44.610146 1694 log.go:172] (0x4000a76160) Data frame received for 3\nI0828 04:13:44.610387 1694 log.go:172] (0x4000a76160) Data frame received for 5\nI0828 04:13:44.610632 1694 log.go:172] (0x40008180a0) (5) Data frame handling\nI0828 04:13:44.610892 1694 log.go:172] (0x4000a76160) Data frame received for 1\nI0828 04:13:44.611064 1694 log.go:172] (0x40007fbb80) (1) Data frame handling\nI0828 04:13:44.611306 1694 log.go:172] (0x4000818000) (3) Data frame handling\nI0828 04:13:44.612215 1694 log.go:172] (0x40008180a0) (5) Data frame sent\nI0828 04:13:44.612348 1694 log.go:172] (0x4000818000) (3) Data frame sent\nI0828 04:13:44.612422 1694 log.go:172] (0x40007fbb80) (1) Data frame sent\nI0828 04:13:44.612519 1694 log.go:172] (0x4000a76160) Data frame received for 3\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 04:13:44.612589 1694 log.go:172] (0x4000818000) (3) Data frame handling\nI0828 04:13:44.612657 1694 log.go:172] (0x4000a76160) Data frame received for 5\nI0828 04:13:44.612799 1694 log.go:172] (0x40008180a0) (5) Data frame handling\nI0828 04:13:44.614302 1694 log.go:172] (0x4000a76160) (0x40007fbb80) Stream removed, broadcasting: 1\nI0828 04:13:44.616230 1694 log.go:172] (0x4000a76160) Go away received\nI0828 04:13:44.618981 1694 log.go:172] (0x4000a76160) (0x40007fbb80) Stream removed, broadcasting: 1\nI0828 04:13:44.619490 1694 log.go:172] (0x4000a76160) (0x4000818000) Stream removed, broadcasting: 3\nI0828 04:13:44.619648 1694 log.go:172] (0x4000a76160) (0x40008180a0) Stream removed, broadcasting: 5\n" Aug 28 04:13:44.637: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 28 04:13:44.637: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 28 04:13:44.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 28 04:13:46.150: INFO: stderr: "I0828 04:13:45.973638 1717 log.go:172] (0x4000a2aa50) (0x4000b48000) Create stream\nI0828 04:13:45.977433 1717 log.go:172] (0x4000a2aa50) (0x4000b48000) Stream added, broadcasting: 1\nI0828 04:13:45.988167 1717 log.go:172] (0x4000a2aa50) Reply frame received for 1\nI0828 04:13:45.989140 1717 log.go:172] (0x4000a2aa50) (0x4000be4000) Create stream\nI0828 04:13:45.989213 1717 log.go:172] (0x4000a2aa50) (0x4000be4000) Stream added, broadcasting: 3\nI0828 04:13:45.990756 1717 log.go:172] (0x4000a2aa50) Reply frame received for 3\nI0828 04:13:45.991272 1717 log.go:172] (0x4000a2aa50) (0x4000b48140) Create stream\nI0828 04:13:45.991367 1717 log.go:172] (0x4000a2aa50) (0x4000b48140) Stream added, broadcasting: 5\nI0828 04:13:45.993351 1717 log.go:172] (0x4000a2aa50) Reply frame received for 5\nI0828 04:13:46.076520 1717 log.go:172] 
(0x4000a2aa50) Data frame received for 5\nI0828 04:13:46.076816 1717 log.go:172] (0x4000b48140) (5) Data frame handling\nI0828 04:13:46.077308 1717 log.go:172] (0x4000b48140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 04:13:46.125311 1717 log.go:172] (0x4000a2aa50) Data frame received for 3\nI0828 04:13:46.125476 1717 log.go:172] (0x4000be4000) (3) Data frame handling\nI0828 04:13:46.125571 1717 log.go:172] (0x4000be4000) (3) Data frame sent\nI0828 04:13:46.125645 1717 log.go:172] (0x4000a2aa50) Data frame received for 3\nI0828 04:13:46.125734 1717 log.go:172] (0x4000a2aa50) Data frame received for 5\nI0828 04:13:46.125845 1717 log.go:172] (0x4000b48140) (5) Data frame handling\nI0828 04:13:46.126042 1717 log.go:172] (0x4000be4000) (3) Data frame handling\nI0828 04:13:46.127628 1717 log.go:172] (0x4000a2aa50) Data frame received for 1\nI0828 04:13:46.127699 1717 log.go:172] (0x4000b48000) (1) Data frame handling\nI0828 04:13:46.127771 1717 log.go:172] (0x4000b48000) (1) Data frame sent\nI0828 04:13:46.129391 1717 log.go:172] (0x4000a2aa50) (0x4000b48000) Stream removed, broadcasting: 1\nI0828 04:13:46.134345 1717 log.go:172] (0x4000a2aa50) Go away received\nI0828 04:13:46.137597 1717 log.go:172] (0x4000a2aa50) (0x4000b48000) Stream removed, broadcasting: 1\nI0828 04:13:46.138533 1717 log.go:172] (0x4000a2aa50) (0x4000be4000) Stream removed, broadcasting: 3\nI0828 04:13:46.139256 1717 log.go:172] (0x4000a2aa50) (0x4000b48140) Stream removed, broadcasting: 5\n" Aug 28 04:13:46.151: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 28 04:13:46.151: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 28 04:13:46.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 28 04:13:47.666: INFO: stderr: "I0828 04:13:47.474561 1740 log.go:172] (0x4000ab2bb0) (0x400074e1e0) Create stream\nI0828 04:13:47.481146 1740 log.go:172] (0x4000ab2bb0) (0x400074e1e0) Stream added, broadcasting: 1\nI0828 04:13:47.499763 1740 log.go:172] (0x4000ab2bb0) Reply frame received for 1\nI0828 04:13:47.500805 1740 log.go:172] (0x4000ab2bb0) (0x4000828000) Create stream\nI0828 04:13:47.500891 1740 log.go:172] (0x4000ab2bb0) (0x4000828000) Stream added, broadcasting: 3\nI0828 04:13:47.502681 1740 log.go:172] (0x4000ab2bb0) Reply frame received for 3\nI0828 04:13:47.502949 1740 log.go:172] (0x4000ab2bb0) (0x400085c000) Create stream\nI0828 04:13:47.503010 1740 log.go:172] (0x4000ab2bb0) (0x400085c000) Stream added, broadcasting: 5\nI0828 04:13:47.504645 1740 log.go:172] (0x4000ab2bb0) Reply frame received for 5\nI0828 04:13:47.597610 1740 log.go:172] (0x4000ab2bb0) Data frame received for 5\nI0828 04:13:47.597855 1740 log.go:172] (0x400085c000) (5) Data frame handling\nI0828 04:13:47.598328 1740 log.go:172] (0x400085c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 04:13:47.629981 1740 log.go:172] (0x4000ab2bb0) Data frame received for 3\nI0828 04:13:47.630144 1740 log.go:172] (0x4000828000) (3) Data frame handling\nI0828 04:13:47.630227 1740 log.go:172] (0x4000828000) (3) Data frame sent\nI0828 04:13:47.630300 1740 log.go:172] (0x4000ab2bb0) Data frame received for 3\nI0828 04:13:47.630402 1740 log.go:172] (0x4000828000) (3) Data frame handling\nI0828 04:13:47.630734 1740 log.go:172] 
(0x4000ab2bb0) Data frame received for 5\nI0828 04:13:47.630894 1740 log.go:172] (0x400085c000) (5) Data frame handling\nI0828 04:13:47.631460 1740 log.go:172] (0x4000ab2bb0) Data frame received for 1\nI0828 04:13:47.631575 1740 log.go:172] (0x400074e1e0) (1) Data frame handling\nI0828 04:13:47.631697 1740 log.go:172] (0x400074e1e0) (1) Data frame sent\nI0828 04:13:47.633172 1740 log.go:172] (0x4000ab2bb0) (0x400074e1e0) Stream removed, broadcasting: 1\nI0828 04:13:47.638287 1740 log.go:172] (0x4000ab2bb0) Go away received\nI0828 04:13:47.651257 1740 log.go:172] (0x4000ab2bb0) (0x400074e1e0) Stream removed, broadcasting: 1\nI0828 04:13:47.651705 1740 log.go:172] (0x4000ab2bb0) (0x4000828000) Stream removed, broadcasting: 3\nI0828 04:13:47.652643 1740 log.go:172] (0x4000ab2bb0) (0x400085c000) Stream removed, broadcasting: 5\n" Aug 28 04:13:47.667: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 28 04:13:47.667: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 28 04:13:47.667: INFO: Waiting for statefulset status.replicas updated to 0 Aug 28 04:13:47.672: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 28 04:13:57.686: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 28 04:13:57.686: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 28 04:13:57.687: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 28 04:13:57.706: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999994098s Aug 28 04:13:58.752: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992696797s Aug 28 04:13:59.762: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.945958988s Aug 28 04:14:00.772: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.93649263s Aug 28 04:14:01.779: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.926287811s Aug 28 04:14:02.788: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.91920442s Aug 28 04:14:03.797: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.909854777s Aug 28 04:14:04.813: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.901079536s Aug 28 04:14:05.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.884790908s Aug 28 04:14:06.844: INFO: Verifying statefulset ss doesn't scale past 3 for another 875.038888ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6844 Aug 28 04:14:07.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:14:09.565: INFO: stderr: "I0828 04:14:09.427011 1763 log.go:172] (0x4000a70bb0) (0x400065a1e0) Create stream\nI0828 04:14:09.432367 1763 log.go:172] (0x4000a70bb0) (0x400065a1e0) Stream added, broadcasting: 1\nI0828 04:14:09.446872 1763 log.go:172] (0x4000a70bb0) Reply frame received for 1\nI0828 04:14:09.447448 1763 log.go:172] (0x4000a70bb0) (0x400065a280) Create stream\nI0828 04:14:09.447510 1763 log.go:172] (0x4000a70bb0) (0x400065a280) Stream added, broadcasting: 3\nI0828 04:14:09.448846 1763 log.go:172] (0x4000a70bb0) Reply frame received for 3\nI0828 04:14:09.449107 1763 log.go:172] (0x4000a70bb0) (0x40004fd4a0) 
Create stream\nI0828 04:14:09.449185 1763 log.go:172] (0x4000a70bb0) (0x40004fd4a0) Stream added, broadcasting: 5\nI0828 04:14:09.450327 1763 log.go:172] (0x4000a70bb0) Reply frame received for 5\nI0828 04:14:09.547368 1763 log.go:172] (0x4000a70bb0) Data frame received for 3\nI0828 04:14:09.547630 1763 log.go:172] (0x4000a70bb0) Data frame received for 5\nI0828 04:14:09.547848 1763 log.go:172] (0x40004fd4a0) (5) Data frame handling\nI0828 04:14:09.548095 1763 log.go:172] (0x400065a280) (3) Data frame handling\nI0828 04:14:09.548285 1763 log.go:172] (0x4000a70bb0) Data frame received for 1\nI0828 04:14:09.548362 1763 log.go:172] (0x400065a1e0) (1) Data frame handling\nI0828 04:14:09.549196 1763 log.go:172] (0x400065a280) (3) Data frame sent\nI0828 04:14:09.549269 1763 log.go:172] (0x400065a1e0) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0828 04:14:09.549350 1763 log.go:172] (0x40004fd4a0) (5) Data frame sent\nI0828 04:14:09.549775 1763 log.go:172] (0x4000a70bb0) Data frame received for 5\nI0828 04:14:09.549831 1763 log.go:172] (0x40004fd4a0) (5) Data frame handling\nI0828 04:14:09.549986 1763 log.go:172] (0x4000a70bb0) Data frame received for 3\nI0828 04:14:09.550064 1763 log.go:172] (0x400065a280) (3) Data frame handling\nI0828 04:14:09.551235 1763 log.go:172] (0x4000a70bb0) (0x400065a1e0) Stream removed, broadcasting: 1\nI0828 04:14:09.552048 1763 log.go:172] (0x4000a70bb0) Go away received\nI0828 04:14:09.554303 1763 log.go:172] (0x4000a70bb0) (0x400065a1e0) Stream removed, broadcasting: 1\nI0828 04:14:09.554507 1763 log.go:172] (0x4000a70bb0) (0x400065a280) Stream removed, broadcasting: 3\nI0828 04:14:09.554646 1763 log.go:172] (0x4000a70bb0) (0x40004fd4a0) Stream removed, broadcasting: 5\n" Aug 28 04:14:09.565: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 28 04:14:09.565: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 28 04:14:09.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:14:11.026: INFO: stderr: "I0828 04:14:10.921211 1786 log.go:172] (0x4000b1eb00) (0x4000a02000) Create stream\nI0828 04:14:10.924562 1786 log.go:172] (0x4000b1eb00) (0x4000a02000) Stream added, broadcasting: 1\nI0828 04:14:10.938094 1786 log.go:172] (0x4000b1eb00) Reply frame received for 1\nI0828 04:14:10.938662 1786 log.go:172] (0x4000b1eb00) (0x40007ebb80) Create stream\nI0828 04:14:10.938720 1786 log.go:172] (0x4000b1eb00) (0x40007ebb80) Stream added, broadcasting: 3\nI0828 04:14:10.940124 1786 log.go:172] (0x4000b1eb00) Reply frame received for 3\nI0828 04:14:10.940453 1786 log.go:172] (0x4000b1eb00) (0x4000a020a0) Create stream\nI0828 04:14:10.940550 1786 log.go:172] (0x4000b1eb00) (0x4000a020a0) Stream added, broadcasting: 5\nI0828 04:14:10.942101 1786 log.go:172] (0x4000b1eb00) Reply frame received for 5\nI0828 04:14:11.000696 1786 log.go:172] (0x4000b1eb00) Data frame received for 3\nI0828 04:14:11.001207 1786 log.go:172] (0x4000b1eb00) Data frame received for 5\nI0828 04:14:11.001496 1786 log.go:172] (0x4000b1eb00) Data frame received for 1\nI0828 04:14:11.001652 1786 log.go:172] (0x40007ebb80) (3) Data frame handling\nI0828 04:14:11.001775 1786 log.go:172] (0x4000a02000) (1) Data frame handling\nI0828 04:14:11.001858 1786 log.go:172] (0x4000a020a0) (5) Data frame handling\nI0828 
04:14:11.003666 1786 log.go:172] (0x40007ebb80) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0828 04:14:11.003796 1786 log.go:172] (0x4000a020a0) (5) Data frame sent\nI0828 04:14:11.004007 1786 log.go:172] (0x4000a02000) (1) Data frame sent\nI0828 04:14:11.004098 1786 log.go:172] (0x4000b1eb00) Data frame received for 5\nI0828 04:14:11.004182 1786 log.go:172] (0x4000a020a0) (5) Data frame handling\nI0828 04:14:11.004375 1786 log.go:172] (0x4000b1eb00) Data frame received for 3\nI0828 04:14:11.005464 1786 log.go:172] (0x4000b1eb00) (0x4000a02000) Stream removed, broadcasting: 1\nI0828 04:14:11.005876 1786 log.go:172] (0x40007ebb80) (3) Data frame handling\nI0828 04:14:11.007844 1786 log.go:172] (0x4000b1eb00) Go away received\nI0828 04:14:11.012518 1786 log.go:172] (0x4000b1eb00) (0x4000a02000) Stream removed, broadcasting: 1\nI0828 04:14:11.012965 1786 log.go:172] (0x4000b1eb00) (0x40007ebb80) Stream removed, broadcasting: 3\nI0828 04:14:11.013208 1786 log.go:172] (0x4000b1eb00) (0x4000a020a0) Stream removed, broadcasting: 5\n" Aug 28 04:14:11.027: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 28 04:14:11.027: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 28 04:14:11.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:14:12.420: INFO: rc: 1 Aug 28 04:14:12.422: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: I0828 04:14:12.369012 1810 log.go:172] (0x4000a34dc0) (0x4000aa4280) Create stream I0828 04:14:12.372302 1810 log.go:172] (0x4000a34dc0) (0x4000aa4280) Stream added, broadcasting: 1 I0828 04:14:12.389773 1810 log.go:172] (0x4000a34dc0) Reply frame received for 1 I0828 04:14:12.390594 1810 log.go:172] (0x4000a34dc0) (0x40005a94a0) Create stream I0828 04:14:12.390698 1810 log.go:172] (0x4000a34dc0) (0x40005a94a0) Stream added, broadcasting: 3 I0828 04:14:12.392115 1810 log.go:172] (0x4000a34dc0) Reply frame received for 3 I0828 04:14:12.392343 1810 log.go:172] (0x4000a34dc0) (0x4000aa40a0) Create stream I0828 04:14:12.392401 1810 log.go:172] (0x4000a34dc0) (0x4000aa40a0) Stream added, broadcasting: 5 I0828 04:14:12.393679 1810 log.go:172] (0x4000a34dc0) Reply frame received for 5 I0828 04:14:12.399991 1810 log.go:172] (0x4000a34dc0) Data frame received for 1 I0828 04:14:12.400254 1810 log.go:172] (0x4000aa4280) (1) Data frame handling I0828 04:14:12.401764 1810 log.go:172] (0x4000aa4280) (1) Data frame sent I0828 04:14:12.402482 1810 log.go:172] (0x4000a34dc0) (0x40005a94a0) Stream removed, broadcasting: 3 I0828 04:14:12.405939 1810 log.go:172] (0x4000a34dc0) (0x4000aa4280) Stream removed, broadcasting: 1 I0828 04:14:12.406443 1810 log.go:172] (0x4000a34dc0) (0x4000aa40a0) Stream removed, broadcasting: 5 I0828 04:14:12.406934 1810 log.go:172] (0x4000a34dc0) Go away received I0828 04:14:12.410314 1810 log.go:172] (0x4000a34dc0) (0x4000aa4280) Stream removed, broadcasting: 1 I0828 04:14:12.410696 1810 log.go:172] (0x4000a34dc0) (0x40005a94a0) Stream removed, broadcasting: 3 I0828 04:14:12.410781 1810 log.go:172] (0x4000a34dc0) (0x4000aa40a0) Stream removed, broadcasting: 5 error: 
Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "a2062554fd246d78462229da071b96c9fa554b0e313a445b06940f0c0ac9d104": task a53d205dad95d8952a435f2656e6b29144ce91e27405d02191908d931bf33d84 not found: not found error: exit status 1 Aug 28 04:14:22.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:14:23.695: INFO: rc: 1 Aug 28 04:14:23.695: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:14:33.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:14:34.940: INFO: rc: 1 Aug 28 04:14:34.940: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:14:44.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:14:46.180: INFO: rc: 1 Aug 28 04:14:46.180: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:14:56.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:14:57.469: INFO: rc: 1 Aug 28 04:14:57.470: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:15:07.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:15:08.791: INFO: rc: 1 Aug 28 04:15:08.791: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:15:18.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:15:20.063: INFO: rc: 1 Aug 28 04:15:20.064: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:15:30.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:15:31.358: INFO: rc: 1 Aug 28 04:15:31.358: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:15:41.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:15:42.621: INFO: rc: 1 Aug 28 04:15:42.621: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:15:52.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:15:53.880: INFO: rc: 1 Aug 28 04:15:53.880: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:16:03.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:16:05.146: INFO: rc: 1 Aug 28 04:16:05.146: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:16:15.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:16:16.470: INFO: rc: 1 Aug 28 04:16:16.471: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:16:26.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:16:27.741: INFO: rc: 1 Aug 28 04:16:27.742: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:16:37.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:16:38.969: INFO: rc: 1 Aug 28 04:16:38.970: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:16:48.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:16:50.227: INFO: rc: 1 Aug 28 04:16:50.227: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:17:00.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:17:04.303: INFO: rc: 1 Aug 28 04:17:04.303: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:17:14.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:17:15.556: INFO: rc: 1 Aug 28 04:17:15.556: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:17:25.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:17:26.827: INFO: rc: 1 Aug 28 04:17:26.828: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:17:36.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:17:38.102: INFO: rc: 1 Aug 28 04:17:38.102: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:17:48.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:17:49.354: INFO: rc: 1 Aug 28 04:17:49.354: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:17:59.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:18:00.664: INFO: rc: 1 Aug 28 04:18:00.665: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:18:10.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:18:11.893: INFO: rc: 1 Aug 28 04:18:11.894: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:18:21.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:18:23.167: INFO: rc: 1 Aug 28 04:18:23.167: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:18:33.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:18:34.429: INFO: rc: 1 Aug 28 04:18:34.430: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:18:44.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:18:45.663: INFO: rc: 1 Aug 28 04:18:45.663: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh 
-x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:18:55.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:18:56.946: INFO: rc: 1 Aug 28 04:18:56.946: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:19:06.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:19:08.193: INFO: rc: 1 Aug 28 04:19:08.194: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 28 04:19:18.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6844 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:19:19.458: INFO: rc: 1 Aug 28 04:19:19.459: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Aug 28 04:19:19.459: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Aug 28 04:19:19.481: INFO: Deleting all statefulset in ns statefulset-6844 Aug 28 04:19:19.485: INFO: Scaling statefulset ss to 0 Aug 28 04:19:19.497: INFO: Waiting for statefulset status.replicas updated to 0 Aug 28 04:19:19.501: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:19:19.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6844" for this suite. 
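The sequence above exercises OrderedReady pod management: moving index.html out of the Apache docroot makes each pod's readiness probe fail, the framework then confirms the set never scales past 3 while any pod is unhealthy, and the final scale to 0 deletes pods in reverse ordinal order (ss-2 first). The repeated "pods \"ss-2\" not found" retries are the framework re-running RunHostCmd against a pod the scale-down had already deleted; it gives up at its timeout and proceeds. A minimal sketch of driving the same readiness toggle and scale-down by hand, assuming the ss StatefulSet and namespace shown in the log:

$ kubectl -n statefulset-6844 exec ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/'   # readiness probe starts failing
$ kubectl -n statefulset-6844 scale statefulset ss --replicas=0    # deletion proceeds ss-2 -> ss-1 -> ss-0
$ kubectl -n statefulset-6844 get pods -w                          # watch the reverse-order termination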
• [SLOW TEST:380.023 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":82,"skipped":1355,"failed":0} SSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:19:19.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Aug 28 04:19:24.820: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:19:24.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7166" for this suite. 
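The ReplicaSet test above creates a bare pod carrying a 'name' label, shows that a ReplicaSet with a matching selector adopts it (an ownerReference pointing at the ReplicaSet appears on the pod), then changes the pod's label so the controller releases it again. A sketch of inspecting and triggering the same transition by hand while the objects exist; the replacement label value here is illustrative, not the fixture's:

$ kubectl -n replicaset-7166 get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].kind}'   # prints ReplicaSet once adopted
$ kubectl -n replicaset-7166 label pod pod-adoption-release name=no-longer-matching --overwrite               # stop matching the selector
$ kubectl -n replicaset-7166 get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'           # empty again after release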
• [SLOW TEST:5.412 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":83,"skipped":1358,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:19:24.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 28 04:19:25.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7932' Aug 28 04:19:26.623: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 28 04:19:26.624: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Aug 28 04:19:26.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-7932' Aug 28 04:19:28.128: INFO: stderr: "" Aug 28 04:19:28.128: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:19:28.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7932" for this suite. 
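As the stderr above warns, kubectl run --generator=job/v1 is deprecated. The supported way to create the same Job directly is kubectl create job; a sketch of the equivalent invocation and the matching cleanup:

$ kubectl --namespace=kubectl-7932 create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine
$ kubectl --namespace=kubectl-7932 delete jobs e2e-test-httpd-job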
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":84,"skipped":1360,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:19:28.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 28 04:19:29.182: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fef73da9-f07e-40dc-abe9-fcffdc753bc5" in namespace "downward-api-3441" to be "success or failure" Aug 28 04:19:29.204: INFO: Pod "downwardapi-volume-fef73da9-f07e-40dc-abe9-fcffdc753bc5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.200226ms Aug 28 04:19:31.274: INFO: Pod "downwardapi-volume-fef73da9-f07e-40dc-abe9-fcffdc753bc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092130229s Aug 28 04:19:33.281: INFO: Pod "downwardapi-volume-fef73da9-f07e-40dc-abe9-fcffdc753bc5": Phase="Running", Reason="", readiness=true. Elapsed: 4.099203526s Aug 28 04:19:35.287: INFO: Pod "downwardapi-volume-fef73da9-f07e-40dc-abe9-fcffdc753bc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10522471s STEP: Saw pod success Aug 28 04:19:35.288: INFO: Pod "downwardapi-volume-fef73da9-f07e-40dc-abe9-fcffdc753bc5" satisfied condition "success or failure" Aug 28 04:19:35.293: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-fef73da9-f07e-40dc-abe9-fcffdc753bc5 container client-container: STEP: delete the pod Aug 28 04:19:35.368: INFO: Waiting for pod downwardapi-volume-fef73da9-f07e-40dc-abe9-fcffdc753bc5 to disappear Aug 28 04:19:35.411: INFO: Pod downwardapi-volume-fef73da9-f07e-40dc-abe9-fcffdc753bc5 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:19:35.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3441" for this suite. 
• [SLOW TEST:6.441 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1361,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:19:35.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-12604b47-5818-4d5b-a96c-420cf75eeda8 STEP: Creating a pod to test consume configMaps Aug 28 04:19:35.556: INFO: Waiting up to 5m0s for pod "pod-configmaps-0988eee6-5480-4629-8067-f42bc73180d8" in namespace "configmap-2353" to be "success or failure" Aug 28 04:19:35.561: INFO: Pod "pod-configmaps-0988eee6-5480-4629-8067-f42bc73180d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.385374ms Aug 28 04:19:37.568: INFO: Pod "pod-configmaps-0988eee6-5480-4629-8067-f42bc73180d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011478555s Aug 28 04:19:39.575: INFO: Pod "pod-configmaps-0988eee6-5480-4629-8067-f42bc73180d8": Phase="Running", Reason="", readiness=true. Elapsed: 4.018675574s Aug 28 04:19:41.607: INFO: Pod "pod-configmaps-0988eee6-5480-4629-8067-f42bc73180d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049932572s STEP: Saw pod success Aug 28 04:19:41.607: INFO: Pod "pod-configmaps-0988eee6-5480-4629-8067-f42bc73180d8" satisfied condition "success or failure" Aug 28 04:19:41.636: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-0988eee6-5480-4629-8067-f42bc73180d8 container configmap-volume-test: STEP: delete the pod Aug 28 04:19:41.727: INFO: Waiting for pod pod-configmaps-0988eee6-5480-4629-8067-f42bc73180d8 to disappear Aug 28 04:19:41.743: INFO: Pod pod-configmaps-0988eee6-5480-4629-8067-f42bc73180d8 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:19:41.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2353" for this suite. 
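The ConfigMap test above mounts the ConfigMap as a volume with an items mapping, so a chosen key is projected to an explicit file path instead of a file named after the key, and the pod's logs are checked for the mapped file's content. The ConfigMap half is easy to reproduce; a sketch with an illustrative name, key, and value (the fixture's exact data isn't shown in this log):

$ kubectl -n configmap-2353 create configmap demo-map --from-literal=data-1=value-1
$ kubectl -n configmap-2353 get configmap demo-map -o jsonpath='{.data.data-1}'   # value-1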
• [SLOW TEST:6.330 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1411,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:19:41.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:19:41.847: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:19:45.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3105" for this suite. 
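The websocket log test above reads from the same API-server endpoint kubectl logs uses, GET /api/v1/namespaces/NAMESPACE/pods/NAME/log, but negotiates a websocket upgrade instead of a plain HTTP response. The endpoint can be poked without websockets; a sketch with an illustrative pod name (the test's generated pod name isn't printed in this log):

$ kubectl get --raw "/api/v1/namespaces/pods-3105/pods/example-pod/log"   # raw log read over HTTP
$ kubectl -n pods-3105 logs example-pod                                   # same data via the porcelain command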
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1429,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:19:45.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 28 04:19:46.087: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 28 04:19:51.114: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:19:51.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2858" for this suite. • [SLOW TEST:5.278 seconds] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":88,"skipped":1459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:19:51.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment 
to be ready Aug 28 04:19:54.678: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 28 04:19:56.875: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185194, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185194, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185194, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185194, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 28 04:19:58.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185194, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185194, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185194, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185194, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 28 04:20:01.915: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:20:01.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1014" for this suite. 
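The discovery STEPs above can be replayed against any cluster that serves admissionregistration.k8s.io/v1; a sketch:

$ kubectl get --raw /apis | grep -o admissionregistration.k8s.io   # group present in the root discovery document
$ kubectl get --raw /apis/admissionregistration.k8s.io/v1          # lists mutatingwebhookconfigurations and validatingwebhookconfigurations
$ kubectl api-resources --api-group=admissionregistration.k8s.io   # same resources via the porcelain command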
STEP: Destroying namespace "webhook-1014-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.817 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":89,"skipped":1499,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:20:02.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Aug 28 04:20:02.511: INFO: PodSpec: initContainers in spec.initContainers Aug 28 04:20:50.128: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ceaeeb9e-6952-4921-aad2-42b029d2af37", GenerateName:"", Namespace:"init-container-3914", SelfLink:"/api/v1/namespaces/init-container-3914/pods/pod-init-ceaeeb9e-6952-4921-aad2-42b029d2af37", UID:"13d855a6-16d7-4d1d-a930-572e7de82341", ResourceVersion:"4484963", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734185202, loc:(*time.Location)(0x726af60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"510049914"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-gwh4q", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(0x40041a6980), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gwh4q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gwh4q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gwh4q", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4003a9c548), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40050aa960), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4003a9c5d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4003a9c5f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x4003a9c5f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x4003a9c5fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185203, loc:(*time.Location)(0x726af60)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185203, loc:(*time.Location)(0x726af60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185203, loc:(*time.Location)(0x726af60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185202, loc:(*time.Location)(0x726af60)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.3", PodIP:"10.244.1.249", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.249"}}, StartTime:(*v1.Time)(0x4003257c40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(0x4002e2f0a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x4002e2f110)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://1d230c1e52cf77198ae4fb9f712ec0b48adf6bd7615c25441bec8a2d858db671", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4003257c80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4003257c60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0x4003a9c67f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:20:50.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3914" for this suite. 
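------------------------------
For reference, the pod this InitContainer test creates reduces to the manifest below, reconstructed from the spec dump above (the name is illustrative; the real test generates a UUID-suffixed one). With restartPolicy: Always, a failing init container is retried with backoff rather than failing the pod, so init1 crash-loops, init2 never runs, and the app container run1 never starts:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example        # illustrative; the test uses a generated name
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]     # always exits non-zero, so initialization never completes
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]      # never reached while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
      limits:
        cpu: 100m

That is exactly what the dump shows: init1 with RestartCount:3 and a Terminated last state, init2 still Waiting, run1 Waiting, and the pod Pending with Initialized=False ("containers with incomplete status: [init1 init2]").
------------------------------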
• [SLOW TEST:48.158 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":90,"skipped":1534,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:20:50.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 28 04:20:50.332: INFO: Waiting up to 5m0s for pod "pod-31deb525-3893-4218-a591-bb5106de134d" in namespace "emptydir-9377" to be "success or failure" Aug 28 04:20:50.368: INFO: Pod "pod-31deb525-3893-4218-a591-bb5106de134d": Phase="Pending", Reason="", readiness=false. Elapsed: 35.991986ms Aug 28 04:20:52.375: INFO: Pod "pod-31deb525-3893-4218-a591-bb5106de134d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042737151s Aug 28 04:20:54.383: INFO: Pod "pod-31deb525-3893-4218-a591-bb5106de134d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050448163s STEP: Saw pod success Aug 28 04:20:54.383: INFO: Pod "pod-31deb525-3893-4218-a591-bb5106de134d" satisfied condition "success or failure" Aug 28 04:20:54.387: INFO: Trying to get logs from node jerma-worker pod pod-31deb525-3893-4218-a591-bb5106de134d container test-container: STEP: delete the pod Aug 28 04:20:54.463: INFO: Waiting for pod pod-31deb525-3893-4218-a591-bb5106de134d to disappear Aug 28 04:20:54.484: INFO: Pod pod-31deb525-3893-4218-a591-bb5106de134d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:20:54.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9377" for this suite. 
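------------------------------
The emptydir case above boils down to mounting a tmpfs-backed emptyDir into a non-root pod and checking that the volume is created with mode 0777 and is writable. A minimal sketch, assuming a busybox image and UID 1001 (the conformance test uses its own test image and arguments, which don't appear in this log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example    # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001             # assumption: any non-root UID exercises the "non-root" case
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumption: stand-in for the test image
    command: ["sh", "-c", "stat -c '%a' /test-volume && touch /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory            # "tmpfs" in the test name means medium: Memory

medium: Memory is what makes this a tmpfs mount; with the default medium the volume would live on the node's disk instead.
------------------------------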
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1568,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:20:54.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:20:54.651: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 28 04:21:04.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7182 create -f -' Aug 28 04:21:09.336: INFO: stderr: "" Aug 28 04:21:09.336: INFO: stdout: "e2e-test-crd-publish-openapi-807-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 28 04:21:09.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7182 delete e2e-test-crd-publish-openapi-807-crds test-cr' Aug 28 04:21:10.560: INFO: stderr: "" Aug 28 04:21:10.561: INFO: stdout: "e2e-test-crd-publish-openapi-807-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Aug 28 04:21:10.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7182 apply -f -' Aug 28 04:21:12.205: INFO: stderr: "" Aug 28 04:21:12.206: INFO: stdout: "e2e-test-crd-publish-openapi-807-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 28 04:21:12.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7182 delete e2e-test-crd-publish-openapi-807-crds test-cr' Aug 28 04:21:13.416: INFO: stderr: "" Aug 28 04:21:13.416: INFO: stdout: "e2e-test-crd-publish-openapi-807-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 28 04:21:13.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-807-crds' Aug 28 04:21:15.043: INFO: stderr: "" Aug 28 04:21:15.043: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-807-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:21:34.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"crd-publish-openapi-7182" for this suite. • [SLOW TEST:40.106 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":92,"skipped":1607,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:21:34.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Aug 28 04:21:34.827: INFO: Waiting up to 5m0s for pod "var-expansion-d00e1682-fee7-4c34-b4bd-413181ffeda5" in namespace "var-expansion-3766" to be "success or failure" Aug 28 04:21:34.834: INFO: Pod "var-expansion-d00e1682-fee7-4c34-b4bd-413181ffeda5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.619567ms Aug 28 04:21:36.866: INFO: Pod "var-expansion-d00e1682-fee7-4c34-b4bd-413181ffeda5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039702298s Aug 28 04:21:38.874: INFO: Pod "var-expansion-d00e1682-fee7-4c34-b4bd-413181ffeda5": Phase="Running", Reason="", readiness=true. Elapsed: 4.047196812s Aug 28 04:21:40.881: INFO: Pod "var-expansion-d00e1682-fee7-4c34-b4bd-413181ffeda5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054247793s STEP: Saw pod success Aug 28 04:21:40.881: INFO: Pod "var-expansion-d00e1682-fee7-4c34-b4bd-413181ffeda5" satisfied condition "success or failure" Aug 28 04:21:40.887: INFO: Trying to get logs from node jerma-worker pod var-expansion-d00e1682-fee7-4c34-b4bd-413181ffeda5 container dapi-container: STEP: delete the pod Aug 28 04:21:40.924: INFO: Waiting for pod var-expansion-d00e1682-fee7-4c34-b4bd-413181ffeda5 to disappear Aug 28 04:21:40.944: INFO: Pod var-expansion-d00e1682-fee7-4c34-b4bd-413181ffeda5 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:21:40.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3766" for this suite. 
• [SLOW TEST:6.264 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1618,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:21:40.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:21:41.018: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 28 04:21:41.037: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 28 04:21:46.062: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 28 04:21:46.063: INFO: Creating deployment "test-rolling-update-deployment" Aug 28 04:21:46.076: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Aug 28 04:21:46.132: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Aug 28 04:21:48.170: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Aug 28 04:21:48.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185306, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185306, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185306, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185306, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 28 04:21:50.182: INFO: 
Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Aug 28 04:21:50.199: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1897 /apis/apps/v1/namespaces/deployment-1897/deployments/test-rolling-update-deployment 9bdf6bf4-345d-43d3-8d9c-5bd46d7421d4 4485263 1 2020-08-28 04:21:46 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4006043d68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-28 04:21:46 +0000 UTC,LastTransitionTime:2020-08-28 04:21:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-08-28 04:21:49 +0000 UTC,LastTransitionTime:2020-08-28 04:21:46 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 28 04:21:50.204: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-1897 /apis/apps/v1/namespaces/deployment-1897/replicasets/test-rolling-update-deployment-67cf4f6444 0a611cd0-f2a4-4990-9d8f-e53e8dbe68b7 4485252 1 2020-08-28 04:21:46 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 9bdf6bf4-345d-43d3-8d9c-5bd46d7421d4 0x40060ba287 0x40060ba288}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} []
[] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40060ba2f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 28 04:21:50.205: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Aug 28 04:21:50.205: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1897 /apis/apps/v1/namespaces/deployment-1897/replicasets/test-rolling-update-controller 2db99a79-b641-499f-a100-98f11c359ca1 4485261 2 2020-08-28 04:21:41 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 9bdf6bf4-345d-43d3-8d9c-5bd46d7421d4 0x40060ba18f 0x40060ba1b0}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x40060ba218 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 28 04:21:50.210: INFO: Pod "test-rolling-update-deployment-67cf4f6444-8z4t5" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-8z4t5 test-rolling-update-deployment-67cf4f6444- deployment-1897 /api/v1/namespaces/deployment-1897/pods/test-rolling-update-deployment-67cf4f6444-8z4t5 448a026b-0779-4a56-b0de-5b7f7a623acb 4485251 0 2020-08-28 04:21:46 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 0a611cd0-f2a4-4990-9d8f-e53e8dbe68b7 0x4006071587 0x4006071588}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gzzkm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gzzkm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gzzkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:21:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:21:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:21:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:21:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.250,StartTime:2020-08-28 04:21:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 04:21:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://731dd55390ed27cdfaa1d19b86a741d7a9f52c0cd2dec2a71607985556b5077a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.250,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:21:50.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1897" for this suite. • [SLOW TEST:9.261 seconds] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":94,"skipped":1645,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:21:50.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-3102a8b1-db27-4c0b-8e01-a44b11efe1ca STEP: Creating a pod to test consume secrets Aug 28 04:21:50.372: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-86cd195d-bd18-45f0-85cf-5beb037ccfab" in namespace "projected-9259" to be "success or failure" Aug 28 04:21:50.446: INFO: Pod "pod-projected-secrets-86cd195d-bd18-45f0-85cf-5beb037ccfab": Phase="Pending", Reason="", readiness=false. Elapsed: 74.27281ms Aug 28 04:21:52.453: INFO: Pod "pod-projected-secrets-86cd195d-bd18-45f0-85cf-5beb037ccfab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0808739s Aug 28 04:21:54.459: INFO: Pod "pod-projected-secrets-86cd195d-bd18-45f0-85cf-5beb037ccfab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.087100223s STEP: Saw pod success Aug 28 04:21:54.459: INFO: Pod "pod-projected-secrets-86cd195d-bd18-45f0-85cf-5beb037ccfab" satisfied condition "success or failure" Aug 28 04:21:54.463: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-86cd195d-bd18-45f0-85cf-5beb037ccfab container projected-secret-volume-test: STEP: delete the pod Aug 28 04:21:54.501: INFO: Waiting for pod pod-projected-secrets-86cd195d-bd18-45f0-85cf-5beb037ccfab to disappear Aug 28 04:21:54.661: INFO: Pod pod-projected-secrets-86cd195d-bd18-45f0-85cf-5beb037ccfab no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:21:54.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9259" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1664,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:21:54.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-82437114-de9f-4912-bc24-3266dab3bc1a STEP: Creating a pod to test consume secrets Aug 28 04:21:54.821: INFO: Waiting up to 5m0s for pod "pod-secrets-1c21f396-7385-4a9a-8e1f-1269b3d2c94c" in namespace "secrets-7507" to be "success or failure" Aug 28 04:21:54.842: INFO: Pod "pod-secrets-1c21f396-7385-4a9a-8e1f-1269b3d2c94c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.658316ms Aug 28 04:21:56.847: INFO: Pod "pod-secrets-1c21f396-7385-4a9a-8e1f-1269b3d2c94c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026568235s Aug 28 04:21:58.854: INFO: Pod "pod-secrets-1c21f396-7385-4a9a-8e1f-1269b3d2c94c": Phase="Running", Reason="", readiness=true. Elapsed: 4.033067235s Aug 28 04:22:00.860: INFO: Pod "pod-secrets-1c21f396-7385-4a9a-8e1f-1269b3d2c94c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.038997679s STEP: Saw pod success Aug 28 04:22:00.860: INFO: Pod "pod-secrets-1c21f396-7385-4a9a-8e1f-1269b3d2c94c" satisfied condition "success or failure" Aug 28 04:22:00.864: INFO: Trying to get logs from node jerma-worker pod pod-secrets-1c21f396-7385-4a9a-8e1f-1269b3d2c94c container secret-volume-test: STEP: delete the pod Aug 28 04:22:00.905: INFO: Waiting for pod pod-secrets-1c21f396-7385-4a9a-8e1f-1269b3d2c94c to disappear Aug 28 04:22:00.918: INFO: Pod pod-secrets-1c21f396-7385-4a9a-8e1f-1269b3d2c94c no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:22:00.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7507" for this suite. • [SLOW TEST:6.259 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1664,"failed":0} SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:22:00.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Aug 28 04:22:01.010: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 28 04:22:01.038: INFO: Waiting for terminating namespaces to be deleted... 
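------------------------------
Both defaultMode tests above (projected secret and plain secret volume) verify the same mechanism: the mode bits requested on the volume source are applied to the files the kubelet materializes. A minimal sketch of the plain-secret variant, using the secret name from the log; the mode value is an assumption, since it isn't printed here:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29   # assumption: stand-in for the test image
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/* && cat /etc/secret-volume/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-82437114-de9f-4912-bc24-3266dab3bc1a
      defaultMode: 0400         # assumption: the actual mode isn't shown in this log

YAML accepts the octal literal directly; JSON clients must pass the decimal equivalent (0400 octal = 256), because JSON has no octal form.
------------------------------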
Aug 28 04:22:01.042: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Aug 28 04:22:01.055: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded) Aug 28 04:22:01.055: INFO: Container kube-proxy ready: true, restart count 0 Aug 28 04:22:01.055: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded) Aug 28 04:22:01.055: INFO: Container kindnet-cni ready: true, restart count 0 Aug 28 04:22:01.055: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded) Aug 28 04:22:01.055: INFO: Container app ready: true, restart count 0 Aug 28 04:22:01.055: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Aug 28 04:22:01.097: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded) Aug 28 04:22:01.099: INFO: Container kube-proxy ready: true, restart count 0 Aug 28 04:22:01.099: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container status recorded) Aug 28 04:22:01.099: INFO: Container httpd ready: true, restart count 0 Aug 28 04:22:01.099: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded) Aug 28 04:22:01.099: INFO: Container kindnet-cni ready: true, restart count 0 Aug 28 04:22:01.099: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded) Aug 28 04:22:01.099: INFO: Container app ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-298942a1-5a61-48ed-b97e-1e5c47ba4e2e 95 STEP: Trying to create a pod (pod4) with hostport 54322 and hostIP 0.0.0.0 (empty string here) and expect scheduled STEP: Trying to create another pod (pod5) with hostport 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-298942a1-5a61-48ed-b97e-1e5c47ba4e2e off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-298942a1-5a61-48ed-b97e-1e5c47ba4e2e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:27:09.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6691" for this suite.
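------------------------------
The scheduling conflict above can be reproduced with two pods pinned to the same node. The label key/value and host port come from the log; the container details are assumptions, since the test's pod specs aren't printed. pod4 binds 54322 with no hostIP, which means 0.0.0.0, so pod5's 127.0.0.1:54322 on the same node and protocol overlaps it and pod5 stays Pending:

apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/e2e-298942a1-5a61-48ed-b97e-1e5c47ba4e2e: "95"
  containers:
  - name: agnhost               # assumption: illustrative container name
    image: k8s.gcr.io/pause:3.1 # assumption: any image works for the scheduling check
    ports:
    - containerPort: 54322      # assumption: only the hostPort is named in the log
      hostPort: 54322
      protocol: TCP             # hostIP omitted = 0.0.0.0
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/e2e-298942a1-5a61-48ed-b97e-1e5c47ba4e2e: "95"
  containers:
  - name: agnhost
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 54322
      hostPort: 54322
      hostIP: 127.0.0.1         # conflicts with pod4's implicit 0.0.0.0 binding
      protocol: TCP

This is also why the spec runs [Serial] and takes five minutes: most of that time is spent confirming that pod5 never schedules before the node label is cleaned up.
------------------------------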
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.624 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":97,"skipped":1674,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:27:09.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 28 04:27:10.135: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3455740d-5414-4bb1-bebb-70e965aca855" in namespace "downward-api-1943" to be "success or failure" Aug 28 04:27:10.497: INFO: Pod "downwardapi-volume-3455740d-5414-4bb1-bebb-70e965aca855": Phase="Pending", Reason="", readiness=false. Elapsed: 362.048855ms Aug 28 04:27:12.593: INFO: Pod "downwardapi-volume-3455740d-5414-4bb1-bebb-70e965aca855": Phase="Pending", Reason="", readiness=false. Elapsed: 2.457343162s Aug 28 04:27:14.620: INFO: Pod "downwardapi-volume-3455740d-5414-4bb1-bebb-70e965aca855": Phase="Pending", Reason="", readiness=false. Elapsed: 4.484838246s Aug 28 04:27:16.626: INFO: Pod "downwardapi-volume-3455740d-5414-4bb1-bebb-70e965aca855": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.490823665s STEP: Saw pod success Aug 28 04:27:16.626: INFO: Pod "downwardapi-volume-3455740d-5414-4bb1-bebb-70e965aca855" satisfied condition "success or failure" Aug 28 04:27:16.649: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3455740d-5414-4bb1-bebb-70e965aca855 container client-container: STEP: delete the pod Aug 28 04:27:16.805: INFO: Waiting for pod downwardapi-volume-3455740d-5414-4bb1-bebb-70e965aca855 to disappear Aug 28 04:27:17.245: INFO: Pod downwardapi-volume-3455740d-5414-4bb1-bebb-70e965aca855 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:27:17.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1943" for this suite. • [SLOW TEST:7.697 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1677,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:27:17.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 28 04:27:17.590: INFO: Waiting up to 5m0s for pod "downward-api-e8343fa7-2b01-44ca-9d40-afdc5eb91d03" in namespace "downward-api-7309" to be "success or failure" Aug 28 04:27:17.613: INFO: Pod "downward-api-e8343fa7-2b01-44ca-9d40-afdc5eb91d03": Phase="Pending", Reason="", readiness=false. Elapsed: 22.747675ms Aug 28 04:27:19.620: INFO: Pod "downward-api-e8343fa7-2b01-44ca-9d40-afdc5eb91d03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029131305s Aug 28 04:27:21.632: INFO: Pod "downward-api-e8343fa7-2b01-44ca-9d40-afdc5eb91d03": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041187425s STEP: Saw pod success Aug 28 04:27:21.632: INFO: Pod "downward-api-e8343fa7-2b01-44ca-9d40-afdc5eb91d03" satisfied condition "success or failure" Aug 28 04:27:21.637: INFO: Trying to get logs from node jerma-worker2 pod downward-api-e8343fa7-2b01-44ca-9d40-afdc5eb91d03 container dapi-container: STEP: delete the pod Aug 28 04:27:21.714: INFO: Waiting for pod downward-api-e8343fa7-2b01-44ca-9d40-afdc5eb91d03 to disappear Aug 28 04:27:21.749: INFO: Pod downward-api-e8343fa7-2b01-44ca-9d40-afdc5eb91d03 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:27:21.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7309" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1714,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:27:21.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 28 04:27:25.817: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 28 04:27:27.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185645, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185645, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185645, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185645, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 28 04:27:29.844: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185645, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185645, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185645, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734185645, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 28 04:27:32.894: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:27:33.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7948" for this suite. STEP: Destroying namespace "webhook-7948-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.536 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":100,"skipped":1721,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:27:33.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file 
in the container Aug 28 04:27:37.958: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6416 pod-service-account-c6dbd030-c0b5-4797-8422-6a29b9aba678 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Aug 28 04:27:39.436: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6416 pod-service-account-c6dbd030-c0b5-4797-8422-6a29b9aba678 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Aug 28 04:27:40.946: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6416 pod-service-account-c6dbd030-c0b5-4797-8422-6a29b9aba678 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:27:42.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6416" for this suite. • [SLOW TEST:9.153 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":101,"skipped":1723,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:27:42.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-5982/secret-test-14a8dd4e-263d-4e73-b6ca-6d31113b1aae STEP: Creating a pod to test consume secrets Aug 28 04:27:42.682: INFO: Waiting up to 5m0s for pod "pod-configmaps-56ecc806-fef3-43ef-832a-fff0c2304054" in namespace "secrets-5982" to be "success or failure" Aug 28 04:27:42.717: INFO: Pod "pod-configmaps-56ecc806-fef3-43ef-832a-fff0c2304054": Phase="Pending", Reason="", readiness=false. Elapsed: 35.278871ms Aug 28 04:27:45.083: INFO: Pod "pod-configmaps-56ecc806-fef3-43ef-832a-fff0c2304054": Phase="Pending", Reason="", readiness=false. Elapsed: 2.401465529s Aug 28 04:27:47.089: INFO: Pod "pod-configmaps-56ecc806-fef3-43ef-832a-fff0c2304054": Phase="Running", Reason="", readiness=true. Elapsed: 4.407315249s Aug 28 04:27:49.096: INFO: Pod "pod-configmaps-56ecc806-fef3-43ef-832a-fff0c2304054": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.413785697s STEP: Saw pod success Aug 28 04:27:49.096: INFO: Pod "pod-configmaps-56ecc806-fef3-43ef-832a-fff0c2304054" satisfied condition "success or failure" Aug 28 04:27:49.129: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-56ecc806-fef3-43ef-832a-fff0c2304054 container env-test: STEP: delete the pod Aug 28 04:27:49.154: INFO: Waiting for pod pod-configmaps-56ecc806-fef3-43ef-832a-fff0c2304054 to disappear Aug 28 04:27:49.158: INFO: Pod pod-configmaps-56ecc806-fef3-43ef-832a-fff0c2304054 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:27:49.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5982" for this suite. • [SLOW TEST:6.718 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1730,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:27:49.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Aug 28 04:27:57.034: INFO: Successfully updated pod "annotationupdatef7ce5993-b70c-4939-a9f5-90df8c346ac9" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:27:59.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5145" for this suite. 
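The annotation update logged above ("Successfully updated pod annotationupdate...") can be reproduced by hand: the downward API volume re-projects pod annotations after they change on the API object. A minimal sketch, assuming an illustrative pod name and the conventional /etc/podinfo mount path (both hypothetical, not taken from this run):

kubectl annotate pod annotationupdate-demo -n downward-api-5145 test-annotation=updated --overwrite
# the kubelet refreshes the projected file shortly afterwards
kubectl exec annotationupdate-demo -n downward-api-5145 -- cat /etc/podinfo/annotations
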
• [SLOW TEST:10.596 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1749,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:27:59.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Aug 28 04:28:00.441: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:28:01.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8502" for this suite. 
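Passing --port 0 (here via -p 0) asks the proxy to bind an ephemeral port and print the one it chose, which is what the test then curls. A rough hand-run equivalent, with the port placeholder to be filled in from the proxy's own output:

kubectl proxy --port=0 --disable-filter &   # prints e.g. "Starting to serve on 127.0.0.1:XXXXX"
curl http://127.0.0.1:XXXXX/api/
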
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":104,"skipped":1753,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:28:01.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Aug 28 04:28:02.897: INFO: >>> kubeConfig: /root/.kube/config Aug 28 04:28:22.716: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:29:40.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3093" for this suite. • [SLOW TEST:98.989 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":105,"skipped":1762,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:29:40.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 
projection with secret that has name projected-secret-test-4a94b551-ba14-4c04-afbf-e782e27dcb9d STEP: Creating a pod to test consume secrets Aug 28 04:29:40.863: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d2fc352c-0e00-4660-978a-d69c65b521be" in namespace "projected-814" to be "success or failure" Aug 28 04:29:40.927: INFO: Pod "pod-projected-secrets-d2fc352c-0e00-4660-978a-d69c65b521be": Phase="Pending", Reason="", readiness=false. Elapsed: 64.504694ms Aug 28 04:29:42.933: INFO: Pod "pod-projected-secrets-d2fc352c-0e00-4660-978a-d69c65b521be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070068351s Aug 28 04:29:44.971: INFO: Pod "pod-projected-secrets-d2fc352c-0e00-4660-978a-d69c65b521be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107538994s Aug 28 04:29:46.978: INFO: Pod "pod-projected-secrets-d2fc352c-0e00-4660-978a-d69c65b521be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114997904s STEP: Saw pod success Aug 28 04:29:46.978: INFO: Pod "pod-projected-secrets-d2fc352c-0e00-4660-978a-d69c65b521be" satisfied condition "success or failure" Aug 28 04:29:46.983: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-d2fc352c-0e00-4660-978a-d69c65b521be container projected-secret-volume-test: STEP: delete the pod Aug 28 04:29:47.080: INFO: Waiting for pod pod-projected-secrets-d2fc352c-0e00-4660-978a-d69c65b521be to disappear Aug 28 04:29:47.102: INFO: Pod pod-projected-secrets-d2fc352c-0e00-4660-978a-d69c65b521be no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:29:47.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-814" for this suite. 
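What this test asserts is that the projected secret file inside the pod carries the requested mode and group ownership. A hand-run probe along these lines would show it, with the secret name taken from this run but a hypothetical pod name and mount path:

kubectl get secret projected-secret-test-4a94b551-ba14-4c04-afbf-e782e27dcb9d -n projected-814
kubectl exec <pod> -n projected-814 -- ls -ln /etc/projected-secret-volume
# expect the file mode to match defaultMode (e.g. 0440) and the gid to match fsGroup
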
• [SLOW TEST:6.339 seconds] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1790,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:29:47.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4861 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-4861 Aug 28 04:29:47.543: INFO: Found 0 stateful pods, waiting for 1 Aug 28 04:29:57.983: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Aug 28 04:29:58.192: INFO: Deleting all statefulset in ns statefulset-4861 Aug 28 04:29:58.285: INFO: Scaling statefulset ss to 0 Aug 28 04:30:09.372: INFO: Waiting for statefulset status.replicas updated to 0 Aug 28 04:30:09.425: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:30:09.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4861" for this suite. 
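The "getting scale subresource / updating a scale subresource" steps above map directly onto kubectl. A minimal sketch against the ss statefulset from this run (illustrative only, since the namespace is destroyed at the end of the test):

kubectl get --raw /apis/apps/v1/namespaces/statefulset-4861/statefulsets/ss/scale
kubectl scale statefulset ss -n statefulset-4861 --replicas=2
kubectl get statefulset ss -n statefulset-4861 -o jsonpath='{.spec.replicas}'
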
• [SLOW TEST:22.522 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":107,"skipped":1828,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:30:09.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Aug 28 04:30:10.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2091' Aug 28 04:30:12.731: INFO: stderr: "" Aug 28 04:30:12.731: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 28 04:30:12.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2091' Aug 28 04:30:14.155: INFO: stderr: "" Aug 28 04:30:14.155: INFO: stdout: "update-demo-nautilus-tsh2z update-demo-nautilus-x2f4f " Aug 28 04:30:14.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tsh2z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2091' Aug 28 04:30:15.536: INFO: stderr: "" Aug 28 04:30:15.536: INFO: stdout: "" Aug 28 04:30:15.536: INFO: update-demo-nautilus-tsh2z is created but not running Aug 28 04:30:20.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2091' Aug 28 04:30:21.861: INFO: stderr: "" Aug 28 04:30:21.861: INFO: stdout: "update-demo-nautilus-tsh2z update-demo-nautilus-x2f4f " Aug 28 04:30:21.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tsh2z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2091' Aug 28 04:30:23.245: INFO: stderr: "" Aug 28 04:30:23.246: INFO: stdout: "true" Aug 28 04:30:23.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tsh2z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2091' Aug 28 04:30:24.754: INFO: stderr: "" Aug 28 04:30:24.755: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 28 04:30:24.755: INFO: validating pod update-demo-nautilus-tsh2z Aug 28 04:30:24.760: INFO: got data: { "image": "nautilus.jpg" } Aug 28 04:30:24.760: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 28 04:30:24.760: INFO: update-demo-nautilus-tsh2z is verified up and running Aug 28 04:30:24.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x2f4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2091' Aug 28 04:30:26.153: INFO: stderr: "" Aug 28 04:30:26.153: INFO: stdout: "true" Aug 28 04:30:26.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x2f4f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2091' Aug 28 04:30:27.792: INFO: stderr: "" Aug 28 04:30:27.792: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 28 04:30:27.792: INFO: validating pod update-demo-nautilus-x2f4f Aug 28 04:30:28.073: INFO: got data: { "image": "nautilus.jpg" } Aug 28 04:30:28.074: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 28 04:30:28.074: INFO: update-demo-nautilus-x2f4f is verified up and running STEP: using delete to clean up resources Aug 28 04:30:28.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2091' Aug 28 04:30:29.683: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 28 04:30:29.683: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 28 04:30:29.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2091' Aug 28 04:30:30.998: INFO: stderr: "No resources found in kubectl-2091 namespace.\n" Aug 28 04:30:30.998: INFO: stdout: "" Aug 28 04:30:30.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2091 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 28 04:30:32.360: INFO: stderr: "" Aug 28 04:30:32.361: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:30:32.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2091" for this suite. • [SLOW TEST:22.731 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323 should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":108,"skipped":1844,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:30:32.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629 [It] should create a deployment from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 28 04:30:32.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-8847' Aug 28 04:30:34.249: INFO: stderr: "kubectl run 
--generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 28 04:30:34.249: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634 Aug 28 04:30:36.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8847' Aug 28 04:30:38.334: INFO: stderr: "" Aug 28 04:30:38.334: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:30:38.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8847" for this suite. • [SLOW TEST:6.249 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1625 should create a deployment from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":109,"skipped":1858,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:30:38.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
Aug 28 04:30:53.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6364" for this suite. • [SLOW TEST:15.101 seconds] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1879,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:30:53.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Aug 28 04:30:53.969: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:31:02.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4707" for this suite. 
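The behavior under test: with restartPolicy Never, a failing init container is not retried, the pod goes straight to Failed, and the app containers never start. A self-contained sketch of the same setup (all names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["false"]
  containers:
  - name: app
    image: busybox:1.29
    command: ["sleep", "3600"]
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # expect: Failed
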
• [SLOW TEST:8.504 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":111,"skipped":1913,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:31:02.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 28 04:31:02.513: INFO: Waiting up to 5m0s for pod "pod-7ef7b057-505b-4c9e-b755-2ea815ae3a11" in namespace "emptydir-8680" to be "success or failure" Aug 28 04:31:02.529: INFO: Pod "pod-7ef7b057-505b-4c9e-b755-2ea815ae3a11": Phase="Pending", Reason="", readiness=false. Elapsed: 16.418375ms Aug 28 04:31:04.805: INFO: Pod "pod-7ef7b057-505b-4c9e-b755-2ea815ae3a11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291844015s Aug 28 04:31:06.812: INFO: Pod "pod-7ef7b057-505b-4c9e-b755-2ea815ae3a11": Phase="Running", Reason="", readiness=true. Elapsed: 4.29853986s Aug 28 04:31:08.820: INFO: Pod "pod-7ef7b057-505b-4c9e-b755-2ea815ae3a11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.306533691s STEP: Saw pod success Aug 28 04:31:08.820: INFO: Pod "pod-7ef7b057-505b-4c9e-b755-2ea815ae3a11" satisfied condition "success or failure" Aug 28 04:31:08.857: INFO: Trying to get logs from node jerma-worker pod pod-7ef7b057-505b-4c9e-b755-2ea815ae3a11 container test-container: STEP: delete the pod Aug 28 04:31:08.883: INFO: Waiting for pod pod-7ef7b057-505b-4c9e-b755-2ea815ae3a11 to disappear Aug 28 04:31:08.905: INFO: Pod pod-7ef7b057-505b-4c9e-b755-2ea815ae3a11 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:31:08.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8680" for this suite. 
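The tmpfs variant is selected with medium: Memory on the emptyDir; the test then checks the mount type and the default 0777 mode. A minimal sketch under the same assumptions (pod name and mount path illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox:1.29
    command: ["sh", "-c", "mount | grep /mnt/test; stat -c %a /mnt/test"]
    volumeMounts:
    - name: v
      mountPath: /mnt/test
  volumes:
  - name: v
    emptyDir:
      medium: Memory
EOF
kubectl logs tmpfs-demo   # expect a tmpfs mount line and mode 777
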
• [SLOW TEST:6.681 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1939,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:31:08.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b5b52f67-d0ee-486e-a945-24442f1ee111 STEP: Creating a pod to test consume secrets Aug 28 04:31:09.119: INFO: Waiting up to 5m0s for pod "pod-secrets-b130f8fd-3e7c-40d4-bd09-3b7c18c2a5a3" in namespace "secrets-6794" to be "success or failure" Aug 28 04:31:09.140: INFO: Pod "pod-secrets-b130f8fd-3e7c-40d4-bd09-3b7c18c2a5a3": Phase="Pending", Reason="", readiness=false. Elapsed: 20.999873ms Aug 28 04:31:11.146: INFO: Pod "pod-secrets-b130f8fd-3e7c-40d4-bd09-3b7c18c2a5a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02706968s Aug 28 04:31:13.482: INFO: Pod "pod-secrets-b130f8fd-3e7c-40d4-bd09-3b7c18c2a5a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.362920365s Aug 28 04:31:15.489: INFO: Pod "pod-secrets-b130f8fd-3e7c-40d4-bd09-3b7c18c2a5a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.369663724s STEP: Saw pod success Aug 28 04:31:15.489: INFO: Pod "pod-secrets-b130f8fd-3e7c-40d4-bd09-3b7c18c2a5a3" satisfied condition "success or failure" Aug 28 04:31:15.492: INFO: Trying to get logs from node jerma-worker pod pod-secrets-b130f8fd-3e7c-40d4-bd09-3b7c18c2a5a3 container secret-env-test: STEP: delete the pod Aug 28 04:31:15.686: INFO: Waiting for pod pod-secrets-b130f8fd-3e7c-40d4-bd09-3b7c18c2a5a3 to disappear Aug 28 04:31:15.869: INFO: Pod pod-secrets-b130f8fd-3e7c-40d4-bd09-3b7c18c2a5a3 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:31:15.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6794" for this suite. 
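Consuming a secret through environment variables, as this test does, needs no volume; the container spec references the key via secretKeyRef. By hand, kubectl set env can wire an existing workload to a secret (secret name illustrative, and some-app is a hypothetical deployment):

kubectl create secret generic env-secret --from-literal=data-1=value-1
kubectl set env deployment/some-app --from=secret/env-secret
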
• [SLOW TEST:6.956 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1958,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:31:15.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl replace /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 28 04:31:16.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2224' Aug 28 04:31:20.784: INFO: stderr: "" Aug 28 04:31:20.784: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Aug 28 04:31:25.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2224 -o json' Aug 28 04:31:27.072: INFO: stderr: "" Aug 28 04:31:27.072: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-28T04:31:20Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2224\",\n \"resourceVersion\": \"4487541\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2224/pods/e2e-test-httpd-pod\",\n \"uid\": \"9a128fb0-fbe0-49c5-9b9c-eef506f445a4\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-bwgbz\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": 
\"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-bwgbz\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-bwgbz\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-28T04:31:20Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-28T04:31:24Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-28T04:31:24Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-28T04:31:20Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://6a1c347736a690672a7dca5c0885f10e72225f04847fbb432ae3daae5df418d1\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-28T04:31:23Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.101\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.101\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-28T04:31:20Z\"\n }\n}\n" STEP: replace the image in the pod Aug 28 04:31:27.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2224' Aug 28 04:31:28.726: INFO: stderr: "" Aug 28 04:31:28.726: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801 Aug 28 04:31:28.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2224' Aug 28 04:31:41.597: INFO: stderr: "" Aug 28 04:31:41.598: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:31:41.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2224" for this suite. 
• [SLOW TEST:25.731 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792 should update a single-container pod's image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":114,"skipped":1975,"failed":0} S ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:31:41.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Aug 28 04:31:41.768: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7538" to be "success or failure" Aug 28 04:31:41.797: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 28.362474ms Aug 28 04:31:43.803: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03514125s Aug 28 04:31:45.818: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050174463s Aug 28 04:31:47.825: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056799343s STEP: Saw pod success Aug 28 04:31:47.825: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Aug 28 04:31:47.835: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Aug 28 04:31:47.918: INFO: Waiting for pod pod-host-path-test to disappear Aug 28 04:31:47.931: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:31:47.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7538" for this suite. 
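The hostPath check parallels the emptyDir one: the test pod mounts a host directory and asserts the expected mode on the mount point. A hand-run probe against a pod like the one above, with the /test-volume mount path assumed rather than taken from the log:

kubectl exec pod-host-path-test -c test-container-1 -n hostpath-7538 -- ls -ld /test-volume
kubectl exec pod-host-path-test -c test-container-1 -n hostpath-7538 -- stat -c %a /test-volume
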
• [SLOW TEST:6.332 seconds] [sig-storage] HostPath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1976,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:31:47.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Aug 28 04:31:54.142: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1376 PodName:pod-sharedvolume-5a300e44-40bd-47e3-93c2-12ae85c54331 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 04:31:54.142: INFO: >>> kubeConfig: /root/.kube/config I0828 04:31:54.202463 8 log.go:172] (0x40028b0580) (0x400183d540) Create stream I0828 04:31:54.202637 8 log.go:172] (0x40028b0580) (0x400183d540) Stream added, broadcasting: 1 I0828 04:31:54.207121 8 log.go:172] (0x40028b0580) Reply frame received for 1 I0828 04:31:54.207363 8 log.go:172] (0x40028b0580) (0x400183d680) Create stream I0828 04:31:54.207480 8 log.go:172] (0x40028b0580) (0x400183d680) Stream added, broadcasting: 3 I0828 04:31:54.209440 8 log.go:172] (0x40028b0580) Reply frame received for 3 I0828 04:31:54.209655 8 log.go:172] (0x40028b0580) (0x4000956320) Create stream I0828 04:31:54.209762 8 log.go:172] (0x40028b0580) (0x4000956320) Stream added, broadcasting: 5 I0828 04:31:54.211175 8 log.go:172] (0x40028b0580) Reply frame received for 5 I0828 04:31:54.301109 8 log.go:172] (0x40028b0580) Data frame received for 3 I0828 04:31:54.301375 8 log.go:172] (0x400183d680) (3) Data frame handling I0828 04:31:54.301540 8 log.go:172] (0x400183d680) (3) Data frame sent I0828 04:31:54.301696 8 log.go:172] (0x40028b0580) Data frame received for 3 I0828 04:31:54.301834 8 log.go:172] (0x400183d680) (3) Data frame handling I0828 04:31:54.302082 8 log.go:172] (0x40028b0580) Data frame received for 5 I0828 04:31:54.302292 8 log.go:172] (0x4000956320) (5) Data frame handling I0828 04:31:54.303343 8 log.go:172] (0x40028b0580) Data frame received for 1 I0828 04:31:54.303587 8 log.go:172] (0x400183d540) (1) Data frame handling I0828 04:31:54.303733 8 log.go:172] 
(0x400183d540) (1) Data frame sent I0828 04:31:54.303860 8 log.go:172] (0x40028b0580) (0x400183d540) Stream removed, broadcasting: 1 I0828 04:31:54.304044 8 log.go:172] (0x40028b0580) Go away received I0828 04:31:54.304521 8 log.go:172] (0x40028b0580) (0x400183d540) Stream removed, broadcasting: 1 I0828 04:31:54.304694 8 log.go:172] (0x40028b0580) (0x400183d680) Stream removed, broadcasting: 3 I0828 04:31:54.304965 8 log.go:172] (0x40028b0580) (0x4000956320) Stream removed, broadcasting: 5 Aug 28 04:31:54.305: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:31:54.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1376" for this suite. • [SLOW TEST:6.376 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":116,"skipped":1997,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:31:54.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 28 04:31:54.517: INFO: Waiting up to 5m0s for pod "downward-api-b7bd306b-95fd-4ce7-a4d5-987531dd3450" in namespace "downward-api-7224" to be "success or failure" Aug 28 04:31:54.524: INFO: Pod "downward-api-b7bd306b-95fd-4ce7-a4d5-987531dd3450": Phase="Pending", Reason="", readiness=false. Elapsed: 6.780886ms Aug 28 04:31:56.531: INFO: Pod "downward-api-b7bd306b-95fd-4ce7-a4d5-987531dd3450": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013721717s Aug 28 04:31:58.716: INFO: Pod "downward-api-b7bd306b-95fd-4ce7-a4d5-987531dd3450": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.198752959s STEP: Saw pod success Aug 28 04:31:58.716: INFO: Pod "downward-api-b7bd306b-95fd-4ce7-a4d5-987531dd3450" satisfied condition "success or failure" Aug 28 04:31:58.721: INFO: Trying to get logs from node jerma-worker2 pod downward-api-b7bd306b-95fd-4ce7-a4d5-987531dd3450 container dapi-container: STEP: delete the pod Aug 28 04:31:58.804: INFO: Waiting for pod downward-api-b7bd306b-95fd-4ce7-a4d5-987531dd3450 to disappear Aug 28 04:31:58.912: INFO: Pod downward-api-b7bd306b-95fd-4ce7-a4d5-987531dd3450 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:31:58.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7224" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":2073,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:31:58.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 28 04:31:59.107: INFO: Waiting up to 5m0s for pod "downward-api-f5e25101-b084-4e2e-91f4-55a5bcb1bce6" in namespace "downward-api-1050" to be "success or failure" Aug 28 04:31:59.141: INFO: Pod "downward-api-f5e25101-b084-4e2e-91f4-55a5bcb1bce6": Phase="Pending", Reason="", readiness=false. Elapsed: 34.174575ms Aug 28 04:32:01.148: INFO: Pod "downward-api-f5e25101-b084-4e2e-91f4-55a5bcb1bce6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041132887s Aug 28 04:32:03.155: INFO: Pod "downward-api-f5e25101-b084-4e2e-91f4-55a5bcb1bce6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047714106s Aug 28 04:32:05.248: INFO: Pod "downward-api-f5e25101-b084-4e2e-91f4-55a5bcb1bce6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140598704s Aug 28 04:32:07.254: INFO: Pod "downward-api-f5e25101-b084-4e2e-91f4-55a5bcb1bce6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.146683506s STEP: Saw pod success Aug 28 04:32:07.254: INFO: Pod "downward-api-f5e25101-b084-4e2e-91f4-55a5bcb1bce6" satisfied condition "success or failure" Aug 28 04:32:07.258: INFO: Trying to get logs from node jerma-worker2 pod downward-api-f5e25101-b084-4e2e-91f4-55a5bcb1bce6 container dapi-container: STEP: delete the pod Aug 28 04:32:07.335: INFO: Waiting for pod downward-api-f5e25101-b084-4e2e-91f4-55a5bcb1bce6 to disappear Aug 28 04:32:07.523: INFO: Pod downward-api-f5e25101-b084-4e2e-91f4-55a5bcb1bce6 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:32:07.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1050" for this suite. • [SLOW TEST:8.991 seconds] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":2077,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:32:07.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-493d09b9-610f-463a-ae33-5759cd74bef5 STEP: Creating secret with name s-test-opt-upd-9eab2b81-0b50-4dd7-9eef-d56f63417942 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-493d09b9-610f-463a-ae33-5759cd74bef5 STEP: Updating secret s-test-opt-upd-9eab2b81-0b50-4dd7-9eef-d56f63417942 STEP: Creating secret with name s-test-opt-create-7b3d6a3c-5b99-4dc7-ae92-f0a955c82219 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:33:23.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8797" for this suite. 
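Because the volume marks its secret sources optional, the kubelet tolerates a missing secret and keeps re-projecting files as secrets are deleted, updated, and created, which is exactly the delete/update/create sequence logged above. A minimal sketch of that churn (secret name illustrative; the pod is assumed to mount opt-demo as an optional: true secret volume):

kubectl create secret generic opt-demo --from-literal=data-1=value-1
# mount opt-demo as an optional secret volume in some pod, then:
kubectl delete secret opt-demo                                          # projected file disappears after resync
kubectl create secret generic opt-demo --from-literal=data-1=value-2    # file returns with the new content
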
• [SLOW TEST:75.969 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":2087,"failed":0} SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:33:23.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 28 04:33:36.106: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8256 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 04:33:36.107: INFO: >>> kubeConfig: /root/.kube/config I0828 04:33:36.163615 8 log.go:172] (0x40029a5c30) (0x4001149ae0) Create stream I0828 04:33:36.163826 8 log.go:172] (0x40029a5c30) (0x4001149ae0) Stream added, broadcasting: 1 I0828 04:33:36.168247 8 log.go:172] (0x40029a5c30) Reply frame received for 1 I0828 04:33:36.168472 8 log.go:172] (0x40029a5c30) (0x4001149e00) Create stream I0828 04:33:36.168602 8 log.go:172] (0x40029a5c30) (0x4001149e00) Stream added, broadcasting: 3 I0828 04:33:36.170177 8 log.go:172] (0x40029a5c30) Reply frame received for 3 I0828 04:33:36.170405 8 log.go:172] (0x40029a5c30) (0x4001481b80) Create stream I0828 04:33:36.170493 8 log.go:172] (0x40029a5c30) (0x4001481b80) Stream added, broadcasting: 5 I0828 04:33:36.172100 8 log.go:172] (0x40029a5c30) Reply frame received for 5 I0828 04:33:36.241658 8 log.go:172] (0x40029a5c30) Data frame received for 3 I0828 04:33:36.241804 8 log.go:172] (0x4001149e00) (3) Data frame handling I0828 04:33:36.241923 8 log.go:172] (0x40029a5c30) Data frame received for 5 I0828 04:33:36.242092 8 log.go:172] (0x4001481b80) (5) Data frame handling I0828 04:33:36.242253 8 log.go:172] (0x4001149e00) (3) Data frame sent I0828 04:33:36.242402 8 log.go:172] (0x40029a5c30) Data frame received for 3 I0828 04:33:36.242531 8 log.go:172] (0x4001149e00) (3) Data frame handling I0828 04:33:36.242853 8 log.go:172] (0x40029a5c30) Data frame received for 1 I0828 04:33:36.242976 8 log.go:172] (0x4001149ae0) (1) Data frame handling 
I0828 04:33:36.243117 8 log.go:172] (0x4001149ae0) (1) Data frame sent I0828 04:33:36.243238 8 log.go:172] (0x40029a5c30) (0x4001149ae0) Stream removed, broadcasting: 1 I0828 04:33:36.243345 8 log.go:172] (0x40029a5c30) Go away received I0828 04:33:36.243703 8 log.go:172] (0x40029a5c30) (0x4001149ae0) Stream removed, broadcasting: 1 I0828 04:33:36.243845 8 log.go:172] (0x40029a5c30) (0x4001149e00) Stream removed, broadcasting: 3 I0828 04:33:36.243938 8 log.go:172] (0x40029a5c30) (0x4001481b80) Stream removed, broadcasting: 5 Aug 28 04:33:36.243: INFO: Exec stderr: "" Aug 28 04:33:36.244: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8256 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 04:33:36.244: INFO: >>> kubeConfig: /root/.kube/config I0828 04:33:36.296564 8 log.go:172] (0x40020b6580) (0x4000adba40) Create stream I0828 04:33:36.296705 8 log.go:172] (0x40020b6580) (0x4000adba40) Stream added, broadcasting: 1 I0828 04:33:36.299945 8 log.go:172] (0x40020b6580) Reply frame received for 1 I0828 04:33:36.300057 8 log.go:172] (0x40020b6580) (0x400183db80) Create stream I0828 04:33:36.300116 8 log.go:172] (0x40020b6580) (0x400183db80) Stream added, broadcasting: 3 I0828 04:33:36.301220 8 log.go:172] (0x40020b6580) Reply frame received for 3 I0828 04:33:36.301344 8 log.go:172] (0x40020b6580) (0x4001e2b180) Create stream I0828 04:33:36.301411 8 log.go:172] (0x40020b6580) (0x4001e2b180) Stream added, broadcasting: 5 I0828 04:33:36.302860 8 log.go:172] (0x40020b6580) Reply frame received for 5 I0828 04:33:36.369982 8 log.go:172] (0x40020b6580) Data frame received for 5 I0828 04:33:36.370137 8 log.go:172] (0x4001e2b180) (5) Data frame handling I0828 04:33:36.370352 8 log.go:172] (0x40020b6580) Data frame received for 3 I0828 04:33:36.370558 8 log.go:172] (0x400183db80) (3) Data frame handling I0828 04:33:36.370691 8 log.go:172] (0x400183db80) (3) Data frame sent I0828 04:33:36.370779 8 log.go:172] (0x40020b6580) Data frame received for 3 I0828 04:33:36.370854 8 log.go:172] (0x400183db80) (3) Data frame handling I0828 04:33:36.371519 8 log.go:172] (0x40020b6580) Data frame received for 1 I0828 04:33:36.371644 8 log.go:172] (0x4000adba40) (1) Data frame handling I0828 04:33:36.371751 8 log.go:172] (0x4000adba40) (1) Data frame sent I0828 04:33:36.371858 8 log.go:172] (0x40020b6580) (0x4000adba40) Stream removed, broadcasting: 1 I0828 04:33:36.372010 8 log.go:172] (0x40020b6580) Go away received I0828 04:33:36.372493 8 log.go:172] (0x40020b6580) (0x4000adba40) Stream removed, broadcasting: 1 I0828 04:33:36.372649 8 log.go:172] (0x40020b6580) (0x400183db80) Stream removed, broadcasting: 3 I0828 04:33:36.372891 8 log.go:172] (0x40020b6580) (0x4001e2b180) Stream removed, broadcasting: 5 Aug 28 04:33:36.372: INFO: Exec stderr: "" Aug 28 04:33:36.373: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8256 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 04:33:36.373: INFO: >>> kubeConfig: /root/.kube/config I0828 04:33:36.442015 8 log.go:172] (0x4002cae840) (0x40019415e0) Create stream I0828 04:33:36.442199 8 log.go:172] (0x4002cae840) (0x40019415e0) Stream added, broadcasting: 1 I0828 04:33:36.447051 8 log.go:172] (0x4002cae840) Reply frame received for 1 I0828 04:33:36.447217 8 log.go:172] (0x4002cae840) (0x4000404f00) Create stream I0828 04:33:36.447286 8 log.go:172] (0x4002cae840) 
(0x4000404f00) Stream added, broadcasting: 3 I0828 04:33:36.449262 8 log.go:172] (0x4002cae840) Reply frame received for 3 I0828 04:33:36.449533 8 log.go:172] (0x4002cae840) (0x4000405900) Create stream I0828 04:33:36.449650 8 log.go:172] (0x4002cae840) (0x4000405900) Stream added, broadcasting: 5 I0828 04:33:36.451528 8 log.go:172] (0x4002cae840) Reply frame received for 5 I0828 04:33:36.512150 8 log.go:172] (0x4002cae840) Data frame received for 5 I0828 04:33:36.512342 8 log.go:172] (0x4000405900) (5) Data frame handling I0828 04:33:36.512534 8 log.go:172] (0x4002cae840) Data frame received for 3 I0828 04:33:36.512939 8 log.go:172] (0x4000404f00) (3) Data frame handling I0828 04:33:36.513183 8 log.go:172] (0x4000404f00) (3) Data frame sent I0828 04:33:36.513364 8 log.go:172] (0x4002cae840) Data frame received for 3 I0828 04:33:36.513521 8 log.go:172] (0x4000404f00) (3) Data frame handling I0828 04:33:36.513722 8 log.go:172] (0x4002cae840) Data frame received for 1 I0828 04:33:36.513842 8 log.go:172] (0x40019415e0) (1) Data frame handling I0828 04:33:36.513958 8 log.go:172] (0x40019415e0) (1) Data frame sent I0828 04:33:36.514071 8 log.go:172] (0x4002cae840) (0x40019415e0) Stream removed, broadcasting: 1 I0828 04:33:36.514274 8 log.go:172] (0x4002cae840) Go away received I0828 04:33:36.514506 8 log.go:172] (0x4002cae840) (0x40019415e0) Stream removed, broadcasting: 1 I0828 04:33:36.514630 8 log.go:172] (0x4002cae840) (0x4000404f00) Stream removed, broadcasting: 3 I0828 04:33:36.514701 8 log.go:172] (0x4002cae840) (0x4000405900) Stream removed, broadcasting: 5 Aug 28 04:33:36.514: INFO: Exec stderr: "" Aug 28 04:33:36.514: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8256 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 04:33:36.514: INFO: >>> kubeConfig: /root/.kube/config I0828 04:33:36.579996 8 log.go:172] (0x40028b0bb0) (0x4000cfc500) Create stream I0828 04:33:36.580141 8 log.go:172] (0x40028b0bb0) (0x4000cfc500) Stream added, broadcasting: 1 I0828 04:33:36.584831 8 log.go:172] (0x40028b0bb0) Reply frame received for 1 I0828 04:33:36.585075 8 log.go:172] (0x40028b0bb0) (0x4000cfc960) Create stream I0828 04:33:36.585180 8 log.go:172] (0x40028b0bb0) (0x4000cfc960) Stream added, broadcasting: 3 I0828 04:33:36.587269 8 log.go:172] (0x40028b0bb0) Reply frame received for 3 I0828 04:33:36.587515 8 log.go:172] (0x40028b0bb0) (0x4000cfcbe0) Create stream I0828 04:33:36.587614 8 log.go:172] (0x40028b0bb0) (0x4000cfcbe0) Stream added, broadcasting: 5 I0828 04:33:36.589465 8 log.go:172] (0x40028b0bb0) Reply frame received for 5 I0828 04:33:36.668654 8 log.go:172] (0x40028b0bb0) Data frame received for 3 I0828 04:33:36.668844 8 log.go:172] (0x4000cfc960) (3) Data frame handling I0828 04:33:36.668951 8 log.go:172] (0x4000cfc960) (3) Data frame sent I0828 04:33:36.669132 8 log.go:172] (0x40028b0bb0) Data frame received for 5 I0828 04:33:36.669305 8 log.go:172] (0x4000cfcbe0) (5) Data frame handling I0828 04:33:36.669375 8 log.go:172] (0x40028b0bb0) Data frame received for 3 I0828 04:33:36.669440 8 log.go:172] (0x4000cfc960) (3) Data frame handling I0828 04:33:36.669960 8 log.go:172] (0x40028b0bb0) Data frame received for 1 I0828 04:33:36.670060 8 log.go:172] (0x4000cfc500) (1) Data frame handling I0828 04:33:36.670246 8 log.go:172] (0x4000cfc500) (1) Data frame sent I0828 04:33:36.670362 8 log.go:172] (0x40028b0bb0) (0x4000cfc500) Stream removed, broadcasting: 1 I0828 04:33:36.670529 8 
log.go:172] (0x40028b0bb0) Go away received I0828 04:33:36.671019 8 log.go:172] (0x40028b0bb0) (0x4000cfc500) Stream removed, broadcasting: 1 I0828 04:33:36.671187 8 log.go:172] (0x40028b0bb0) (0x4000cfc960) Stream removed, broadcasting: 3 I0828 04:33:36.671344 8 log.go:172] (0x40028b0bb0) (0x4000cfcbe0) Stream removed, broadcasting: 5 Aug 28 04:33:36.671: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 28 04:33:36.671: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8256 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 04:33:36.671: INFO: >>> kubeConfig: /root/.kube/config I0828 04:33:36.728939 8 log.go:172] (0x4002caee70) (0x4001941b80) Create stream I0828 04:33:36.729068 8 log.go:172] (0x4002caee70) (0x4001941b80) Stream added, broadcasting: 1 I0828 04:33:36.732917 8 log.go:172] (0x4002caee70) Reply frame received for 1 I0828 04:33:36.733082 8 log.go:172] (0x4002caee70) (0x4000cfcd20) Create stream I0828 04:33:36.733161 8 log.go:172] (0x4002caee70) (0x4000cfcd20) Stream added, broadcasting: 3 I0828 04:33:36.734526 8 log.go:172] (0x4002caee70) Reply frame received for 3 I0828 04:33:36.734675 8 log.go:172] (0x4002caee70) (0x4001941cc0) Create stream I0828 04:33:36.734754 8 log.go:172] (0x4002caee70) (0x4001941cc0) Stream added, broadcasting: 5 I0828 04:33:36.735907 8 log.go:172] (0x4002caee70) Reply frame received for 5 I0828 04:33:36.813173 8 log.go:172] (0x4002caee70) Data frame received for 3 I0828 04:33:36.813358 8 log.go:172] (0x4000cfcd20) (3) Data frame handling I0828 04:33:36.813491 8 log.go:172] (0x4002caee70) Data frame received for 5 I0828 04:33:36.813619 8 log.go:172] (0x4001941cc0) (5) Data frame handling I0828 04:33:36.813741 8 log.go:172] (0x4000cfcd20) (3) Data frame sent I0828 04:33:36.813852 8 log.go:172] (0x4002caee70) Data frame received for 3 I0828 04:33:36.813952 8 log.go:172] (0x4000cfcd20) (3) Data frame handling I0828 04:33:36.814647 8 log.go:172] (0x4002caee70) Data frame received for 1 I0828 04:33:36.814781 8 log.go:172] (0x4001941b80) (1) Data frame handling I0828 04:33:36.814895 8 log.go:172] (0x4001941b80) (1) Data frame sent I0828 04:33:36.815018 8 log.go:172] (0x4002caee70) (0x4001941b80) Stream removed, broadcasting: 1 I0828 04:33:36.815202 8 log.go:172] (0x4002caee70) Go away received I0828 04:33:36.815455 8 log.go:172] (0x4002caee70) (0x4001941b80) Stream removed, broadcasting: 1 I0828 04:33:36.815577 8 log.go:172] (0x4002caee70) (0x4000cfcd20) Stream removed, broadcasting: 3 I0828 04:33:36.815674 8 log.go:172] (0x4002caee70) (0x4001941cc0) Stream removed, broadcasting: 5 Aug 28 04:33:36.815: INFO: Exec stderr: "" Aug 28 04:33:36.815: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8256 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 04:33:36.816: INFO: >>> kubeConfig: /root/.kube/config I0828 04:33:36.911527 8 log.go:172] (0x40028744d0) (0x4001058d20) Create stream I0828 04:33:36.911674 8 log.go:172] (0x40028744d0) (0x4001058d20) Stream added, broadcasting: 1 I0828 04:33:36.915752 8 log.go:172] (0x40028744d0) Reply frame received for 1 I0828 04:33:36.916054 8 log.go:172] (0x40028744d0) (0x4001059040) Create stream I0828 04:33:36.916188 8 log.go:172] (0x40028744d0) (0x4001059040) Stream added, broadcasting: 3 I0828 04:33:36.918114 8 log.go:172] (0x40028744d0) 
Reply frame received for 3 I0828 04:33:36.918293 8 log.go:172] (0x40028744d0) (0x40010599a0) Create stream I0828 04:33:36.918423 8 log.go:172] (0x40028744d0) (0x40010599a0) Stream added, broadcasting: 5 I0828 04:33:36.919651 8 log.go:172] (0x40028744d0) Reply frame received for 5 I0828 04:33:36.975430 8 log.go:172] (0x40028744d0) Data frame received for 3 I0828 04:33:36.975596 8 log.go:172] (0x4001059040) (3) Data frame handling I0828 04:33:36.975689 8 log.go:172] (0x4001059040) (3) Data frame sent I0828 04:33:36.975760 8 log.go:172] (0x40028744d0) Data frame received for 3 I0828 04:33:36.975834 8 log.go:172] (0x4001059040) (3) Data frame handling I0828 04:33:36.975973 8 log.go:172] (0x40028744d0) Data frame received for 5 I0828 04:33:36.976062 8 log.go:172] (0x40010599a0) (5) Data frame handling I0828 04:33:36.977110 8 log.go:172] (0x40028744d0) Data frame received for 1 I0828 04:33:36.977265 8 log.go:172] (0x4001058d20) (1) Data frame handling I0828 04:33:36.977422 8 log.go:172] (0x4001058d20) (1) Data frame sent I0828 04:33:36.977545 8 log.go:172] (0x40028744d0) (0x4001058d20) Stream removed, broadcasting: 1 I0828 04:33:36.977692 8 log.go:172] (0x40028744d0) Go away received I0828 04:33:36.978151 8 log.go:172] (0x40028744d0) (0x4001058d20) Stream removed, broadcasting: 1 I0828 04:33:36.978277 8 log.go:172] (0x40028744d0) (0x4001059040) Stream removed, broadcasting: 3 I0828 04:33:36.978372 8 log.go:172] (0x40028744d0) (0x40010599a0) Stream removed, broadcasting: 5 Aug 28 04:33:36.978: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 28 04:33:36.978: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8256 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 04:33:36.978: INFO: >>> kubeConfig: /root/.kube/config I0828 04:33:37.050033 8 log.go:172] (0x40028b11e0) (0x4000cfd900) Create stream I0828 04:33:37.050322 8 log.go:172] (0x40028b11e0) (0x4000cfd900) Stream added, broadcasting: 1 I0828 04:33:37.054192 8 log.go:172] (0x40028b11e0) Reply frame received for 1 I0828 04:33:37.054343 8 log.go:172] (0x40028b11e0) (0x4000adbcc0) Create stream I0828 04:33:37.054411 8 log.go:172] (0x40028b11e0) (0x4000adbcc0) Stream added, broadcasting: 3 I0828 04:33:37.055792 8 log.go:172] (0x40028b11e0) Reply frame received for 3 I0828 04:33:37.055954 8 log.go:172] (0x40028b11e0) (0x4000550dc0) Create stream I0828 04:33:37.056063 8 log.go:172] (0x40028b11e0) (0x4000550dc0) Stream added, broadcasting: 5 I0828 04:33:37.057651 8 log.go:172] (0x40028b11e0) Reply frame received for 5 I0828 04:33:37.124873 8 log.go:172] (0x40028b11e0) Data frame received for 5 I0828 04:33:37.125019 8 log.go:172] (0x4000550dc0) (5) Data frame handling I0828 04:33:37.125155 8 log.go:172] (0x40028b11e0) Data frame received for 3 I0828 04:33:37.125286 8 log.go:172] (0x4000adbcc0) (3) Data frame handling I0828 04:33:37.125419 8 log.go:172] (0x4000adbcc0) (3) Data frame sent I0828 04:33:37.125552 8 log.go:172] (0x40028b11e0) Data frame received for 3 I0828 04:33:37.125649 8 log.go:172] (0x4000adbcc0) (3) Data frame handling I0828 04:33:37.126157 8 log.go:172] (0x40028b11e0) Data frame received for 1 I0828 04:33:37.126266 8 log.go:172] (0x4000cfd900) (1) Data frame handling I0828 04:33:37.126373 8 log.go:172] (0x4000cfd900) (1) Data frame sent I0828 04:33:37.126482 8 log.go:172] (0x40028b11e0) (0x4000cfd900) Stream removed, broadcasting: 1 I0828 
04:33:37.126595 8 log.go:172] (0x40028b11e0) Go away received I0828 04:33:37.126986 8 log.go:172] (0x40028b11e0) (0x4000cfd900) Stream removed, broadcasting: 1 I0828 04:33:37.127074 8 log.go:172] (0x40028b11e0) (0x4000adbcc0) Stream removed, broadcasting: 3 I0828 04:33:37.127139 8 log.go:172] (0x40028b11e0) (0x4000550dc0) Stream removed, broadcasting: 5 Aug 28 04:33:37.127: INFO: Exec stderr: "" Aug 28 04:33:37.127: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8256 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 04:33:37.127: INFO: >>> kubeConfig: /root/.kube/config I0828 04:33:37.180816 8 log.go:172] (0x400276a840) (0x4001e2b680) Create stream I0828 04:33:37.180964 8 log.go:172] (0x400276a840) (0x4001e2b680) Stream added, broadcasting: 1 I0828 04:33:37.184998 8 log.go:172] (0x400276a840) Reply frame received for 1 I0828 04:33:37.185231 8 log.go:172] (0x400276a840) (0x4001e2b720) Create stream I0828 04:33:37.185359 8 log.go:172] (0x400276a840) (0x4001e2b720) Stream added, broadcasting: 3 I0828 04:33:37.187243 8 log.go:172] (0x400276a840) Reply frame received for 3 I0828 04:33:37.187358 8 log.go:172] (0x400276a840) (0x4000551ea0) Create stream I0828 04:33:37.187419 8 log.go:172] (0x400276a840) (0x4000551ea0) Stream added, broadcasting: 5 I0828 04:33:37.188615 8 log.go:172] (0x400276a840) Reply frame received for 5 I0828 04:33:37.245326 8 log.go:172] (0x400276a840) Data frame received for 5 I0828 04:33:37.245476 8 log.go:172] (0x4000551ea0) (5) Data frame handling I0828 04:33:37.245624 8 log.go:172] (0x400276a840) Data frame received for 3 I0828 04:33:37.245759 8 log.go:172] (0x4001e2b720) (3) Data frame handling I0828 04:33:37.245970 8 log.go:172] (0x4001e2b720) (3) Data frame sent I0828 04:33:37.246103 8 log.go:172] (0x400276a840) Data frame received for 3 I0828 04:33:37.246238 8 log.go:172] (0x4001e2b720) (3) Data frame handling I0828 04:33:37.246625 8 log.go:172] (0x400276a840) Data frame received for 1 I0828 04:33:37.246693 8 log.go:172] (0x4001e2b680) (1) Data frame handling I0828 04:33:37.246783 8 log.go:172] (0x4001e2b680) (1) Data frame sent I0828 04:33:37.246863 8 log.go:172] (0x400276a840) (0x4001e2b680) Stream removed, broadcasting: 1 I0828 04:33:37.246958 8 log.go:172] (0x400276a840) Go away received I0828 04:33:37.247268 8 log.go:172] (0x400276a840) (0x4001e2b680) Stream removed, broadcasting: 1 I0828 04:33:37.247361 8 log.go:172] (0x400276a840) (0x4001e2b720) Stream removed, broadcasting: 3 I0828 04:33:37.247433 8 log.go:172] (0x400276a840) (0x4000551ea0) Stream removed, broadcasting: 5 Aug 28 04:33:37.247: INFO: Exec stderr: "" Aug 28 04:33:37.247: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8256 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 04:33:37.247: INFO: >>> kubeConfig: /root/.kube/config I0828 04:33:37.301510 8 log.go:172] (0x40020b6bb0) (0x4001cfa1e0) Create stream I0828 04:33:37.301696 8 log.go:172] (0x40020b6bb0) (0x4001cfa1e0) Stream added, broadcasting: 1 I0828 04:33:37.304804 8 log.go:172] (0x40020b6bb0) Reply frame received for 1 I0828 04:33:37.304942 8 log.go:172] (0x40020b6bb0) (0x4001e2b7c0) Create stream I0828 04:33:37.304999 8 log.go:172] (0x40020b6bb0) (0x4001e2b7c0) Stream added, broadcasting: 3 I0828 04:33:37.306036 8 log.go:172] (0x40020b6bb0) Reply frame received for 3 I0828 04:33:37.306148 8 log.go:172] 
(0x40020b6bb0) (0x4001cfa280) Create stream I0828 04:33:37.306207 8 log.go:172] (0x40020b6bb0) (0x4001cfa280) Stream added, broadcasting: 5 I0828 04:33:37.307260 8 log.go:172] (0x40020b6bb0) Reply frame received for 5 I0828 04:33:37.366917 8 log.go:172] (0x40020b6bb0) Data frame received for 3 I0828 04:33:37.367077 8 log.go:172] (0x4001e2b7c0) (3) Data frame handling I0828 04:33:37.367178 8 log.go:172] (0x40020b6bb0) Data frame received for 5 I0828 04:33:37.367346 8 log.go:172] (0x4001cfa280) (5) Data frame handling I0828 04:33:37.367549 8 log.go:172] (0x4001e2b7c0) (3) Data frame sent I0828 04:33:37.367680 8 log.go:172] (0x40020b6bb0) Data frame received for 3 I0828 04:33:37.367783 8 log.go:172] (0x4001e2b7c0) (3) Data frame handling I0828 04:33:37.368017 8 log.go:172] (0x40020b6bb0) Data frame received for 1 I0828 04:33:37.368093 8 log.go:172] (0x4001cfa1e0) (1) Data frame handling I0828 04:33:37.368174 8 log.go:172] (0x4001cfa1e0) (1) Data frame sent I0828 04:33:37.368338 8 log.go:172] (0x40020b6bb0) (0x4001cfa1e0) Stream removed, broadcasting: 1 I0828 04:33:37.368494 8 log.go:172] (0x40020b6bb0) Go away received I0828 04:33:37.368846 8 log.go:172] (0x40020b6bb0) (0x4001cfa1e0) Stream removed, broadcasting: 1 I0828 04:33:37.369000 8 log.go:172] (0x40020b6bb0) (0x4001e2b7c0) Stream removed, broadcasting: 3 I0828 04:33:37.369093 8 log.go:172] (0x40020b6bb0) (0x4001cfa280) Stream removed, broadcasting: 5 Aug 28 04:33:37.369: INFO: Exec stderr: "" Aug 28 04:33:37.369: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8256 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 04:33:37.369: INFO: >>> kubeConfig: /root/.kube/config I0828 04:33:37.430199 8 log.go:172] (0x4002a302c0) (0x40023e81e0) Create stream I0828 04:33:37.430333 8 log.go:172] (0x4002a302c0) (0x40023e81e0) Stream added, broadcasting: 1 I0828 04:33:37.434313 8 log.go:172] (0x4002a302c0) Reply frame received for 1 I0828 04:33:37.434470 8 log.go:172] (0x4002a302c0) (0x40023e8280) Create stream I0828 04:33:37.434551 8 log.go:172] (0x4002a302c0) (0x40023e8280) Stream added, broadcasting: 3 I0828 04:33:37.436298 8 log.go:172] (0x4002a302c0) Reply frame received for 3 I0828 04:33:37.436491 8 log.go:172] (0x4002a302c0) (0x4001e2b9a0) Create stream I0828 04:33:37.436617 8 log.go:172] (0x4002a302c0) (0x4001e2b9a0) Stream added, broadcasting: 5 I0828 04:33:37.438360 8 log.go:172] (0x4002a302c0) Reply frame received for 5 I0828 04:33:37.507522 8 log.go:172] (0x4002a302c0) Data frame received for 3 I0828 04:33:37.507693 8 log.go:172] (0x40023e8280) (3) Data frame handling I0828 04:33:37.507849 8 log.go:172] (0x4002a302c0) Data frame received for 5 I0828 04:33:37.508082 8 log.go:172] (0x4001e2b9a0) (5) Data frame handling I0828 04:33:37.508350 8 log.go:172] (0x40023e8280) (3) Data frame sent I0828 04:33:37.508516 8 log.go:172] (0x4002a302c0) Data frame received for 3 I0828 04:33:37.508692 8 log.go:172] (0x40023e8280) (3) Data frame handling I0828 04:33:37.509033 8 log.go:172] (0x4002a302c0) Data frame received for 1 I0828 04:33:37.509172 8 log.go:172] (0x40023e81e0) (1) Data frame handling I0828 04:33:37.509346 8 log.go:172] (0x40023e81e0) (1) Data frame sent I0828 04:33:37.509493 8 log.go:172] (0x4002a302c0) (0x40023e81e0) Stream removed, broadcasting: 1 I0828 04:33:37.509644 8 log.go:172] (0x4002a302c0) Go away received I0828 04:33:37.510003 8 log.go:172] (0x4002a302c0) (0x40023e81e0) Stream removed, broadcasting: 1 I0828 
04:33:37.510165 8 log.go:172] (0x4002a302c0) (0x40023e8280) Stream removed, broadcasting: 3 I0828 04:33:37.510298 8 log.go:172] (0x4002a302c0) (0x4001e2b9a0) Stream removed, broadcasting: 5 Aug 28 04:33:37.510: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:33:37.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8256" for this suite. • [SLOW TEST:13.632 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":2089,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:33:37.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Aug 28 04:33:37.596: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:35:24.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4454" for this suite. 
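The CustomResourcePublishOpenAPI test that just finished sets up a CRD serving two versions, renames one, and then checks that the new name is served, the old name disappears from the published spec, and the untouched version is unchanged. A sketch of the version list such a CRD carries, using the apiextensions v1beta1 types current in this 1.17-era suite; the group and kind are invented for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"}, // must be <plural>.<group>
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Scope: apiextv1beta1.NamespaceScoped,
			Versions: []apiextv1beta1.CustomResourceDefinitionVersion{
				{Name: "v2", Served: true, Storage: true},
				// Renaming the second version (say v3 -> v4) means the apiserver must
				// start serving the new name, drop the old one from discovery and the
				// published OpenAPI spec, and leave v2 untouched -- the three checks
				// the test performs above.
				{Name: "v3", Served: true, Storage: false},
			},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
```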
• [SLOW TEST:107.095 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":121,"skipped":2092,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:35:24.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 28 04:35:28.232: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 28 04:35:30.561: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186128, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186128, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186128, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186128, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 28 04:35:33.600: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:35:33.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource 
e2e-test-webhook-1596-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:35:34.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-873" for this suite. STEP: Destroying namespace "webhook-873-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.928 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":122,"skipped":2094,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:35:34.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
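The taint lines that follow show why only two nodes are counted: the test's DaemonSet pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so the control-plane node is skipped. A hedged sketch of a DaemonSet with that property; the image, command, and labels are placeholders:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:    "app",
						Image:   "busybox", // placeholder; the suite uses its own test image
						Command: []string{"sh", "-c", "sleep 3600"},
					}},
					// No Tolerations entry for node-role.kubernetes.io/master:NoSchedule,
					// so the tainted control-plane node is skipped -- exactly the
					// "DaemonSet pods can't tolerate node jerma-control-plane" lines below.
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
```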
Aug 28 04:35:34.705: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:35:34.715: INFO: Number of nodes with available pods: 0 Aug 28 04:35:34.715: INFO: Node jerma-worker is running more than one daemon pod Aug 28 04:35:35.809: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:35:35.814: INFO: Number of nodes with available pods: 0 Aug 28 04:35:35.814: INFO: Node jerma-worker is running more than one daemon pod Aug 28 04:35:36.759: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:35:36.765: INFO: Number of nodes with available pods: 0 Aug 28 04:35:36.765: INFO: Node jerma-worker is running more than one daemon pod Aug 28 04:35:37.724: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:35:37.729: INFO: Number of nodes with available pods: 0 Aug 28 04:35:37.729: INFO: Node jerma-worker is running more than one daemon pod Aug 28 04:35:38.723: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:35:38.729: INFO: Number of nodes with available pods: 2 Aug 28 04:35:38.729: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
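Setting a daemon pod's phase to 'Failed' is possible because pod status is writable through the status subresource; once the DaemonSet controller sees a failed daemon pod it deletes it and creates a replacement, which is the revival being verified. A minimal sketch of that flip with client-go, assuming the context-less method signatures of the v1.17 era this log comes from; the namespace and pod name are placeholders:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster config, like the log's /root/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ns, name := "daemonsets-6654", "daemon-set-xxxxx" // placeholder pod name
	pod, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod.Status.Phase = v1.PodFailed
	// v1.17-era signature; later client-go versions add a context and UpdateOptions.
	if _, err := client.CoreV1().Pods(ns).UpdateStatus(pod); err != nil {
		panic(err)
	}
	fmt.Println("marked", name, "as Failed; the DaemonSet controller should revive it")
}
```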
Aug 28 04:35:38.815: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:35:38.848: INFO: Number of nodes with available pods: 1 Aug 28 04:35:38.848: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:35:39.877: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:35:39.882: INFO: Number of nodes with available pods: 1 Aug 28 04:35:39.882: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:35:40.893: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:35:40.899: INFO: Number of nodes with available pods: 1 Aug 28 04:35:40.899: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:35:41.973: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:35:42.048: INFO: Number of nodes with available pods: 1 Aug 28 04:35:42.048: INFO: Node jerma-worker2 is running more than one daemon pod Aug 28 04:35:42.931: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 04:35:42.945: INFO: Number of nodes with available pods: 2 Aug 28 04:35:42.945: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6654, will wait for the garbage collector to delete the pods Aug 28 04:35:43.018: INFO: Deleting DaemonSet.extensions daemon-set took: 8.980696ms Aug 28 04:35:43.119: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.784339ms Aug 28 04:35:51.825: INFO: Number of nodes with available pods: 0 Aug 28 04:35:51.825: INFO: Number of running nodes: 0, number of available pods: 0 Aug 28 04:35:51.830: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6654/daemonsets","resourceVersion":"4488687"},"items":null} Aug 28 04:35:51.859: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6654/pods","resourceVersion":"4488687"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:35:51.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6654" for this suite. 
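The teardown above waits for the garbage collector rather than removing pods directly: the DaemonSet is deleted with a propagation policy that leaves its pods to the garbage collector, and the framework polls until the pod count reaches zero. A sketch of such a delete under the same 1.17-era signature; this mirrors the observed behaviour rather than quoting the framework's exact helper:

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Foreground propagation: the DaemonSet lingers with a deletion timestamp
	// until the garbage collector has deleted all of its dependent pods.
	policy := metav1.DeletePropagationForeground
	err = client.AppsV1().DaemonSets("daemonsets-6654").Delete(
		"daemon-set", &metav1.DeleteOptions{PropagationPolicy: &policy}) // 1.17-era signature
	if err != nil {
		panic(err)
	}
}
```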
• [SLOW TEST:17.350 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":123,"skipped":2099,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:35:51.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 28 04:35:52.090: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad5efe39-3dd1-4e60-b270-525b5a9aeb04" in namespace "downward-api-2718" to be "success or failure" Aug 28 04:35:52.190: INFO: Pod "downwardapi-volume-ad5efe39-3dd1-4e60-b270-525b5a9aeb04": Phase="Pending", Reason="", readiness=false. Elapsed: 99.909608ms Aug 28 04:35:54.213: INFO: Pod "downwardapi-volume-ad5efe39-3dd1-4e60-b270-525b5a9aeb04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12286591s Aug 28 04:35:56.237: INFO: Pod "downwardapi-volume-ad5efe39-3dd1-4e60-b270-525b5a9aeb04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.146351997s STEP: Saw pod success Aug 28 04:35:56.237: INFO: Pod "downwardapi-volume-ad5efe39-3dd1-4e60-b270-525b5a9aeb04" satisfied condition "success or failure" Aug 28 04:35:56.243: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ad5efe39-3dd1-4e60-b270-525b5a9aeb04 container client-container: STEP: delete the pod Aug 28 04:35:56.393: INFO: Waiting for pod downwardapi-volume-ad5efe39-3dd1-4e60-b270-525b5a9aeb04 to disappear Aug 28 04:35:56.402: INFO: Pod downwardapi-volume-ad5efe39-3dd1-4e60-b270-525b5a9aeb04 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:35:56.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2718" for this suite. 
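The Downward API volume test above projects the container's memory limit into a file through a downwardAPI volume with a resourceFieldRef. A compact sketch of that wiring; the 64Mi limit, image, and mount path are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: v1.PodSpec{
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					DownwardAPI: &v1.DownwardAPIVolumeSource{
						Items: []v1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// Projects the named container's limits.memory into the file.
							ResourceFieldRef: &v1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative; the test reads the file and exits
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{v1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			RestartPolicy: v1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```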
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2108,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:35:56.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 28 04:35:58.150: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 28 04:36:00.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186158, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186158, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186158, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186158, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 28 04:36:03.210: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:36:03.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4101" for this suite. STEP: Destroying namespace "webhook-4101-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.955 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":125,"skipped":2138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:36:03.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:36:20.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8469" for this suite. • [SLOW TEST:17.374 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":126,"skipped":2201,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:36:20.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 28 04:36:20.917: INFO: Waiting up to 5m0s for pod "pod-cdbb955c-0f72-40e4-83e7-9279725367c7" in namespace "emptydir-1478" to be "success or failure" Aug 28 04:36:20.934: INFO: Pod "pod-cdbb955c-0f72-40e4-83e7-9279725367c7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.313778ms Aug 28 04:36:22.940: INFO: Pod "pod-cdbb955c-0f72-40e4-83e7-9279725367c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022799077s Aug 28 04:36:24.947: INFO: Pod "pod-cdbb955c-0f72-40e4-83e7-9279725367c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03008716s Aug 28 04:36:26.980: INFO: Pod "pod-cdbb955c-0f72-40e4-83e7-9279725367c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063083266s STEP: Saw pod success Aug 28 04:36:26.981: INFO: Pod "pod-cdbb955c-0f72-40e4-83e7-9279725367c7" satisfied condition "success or failure" Aug 28 04:36:26.986: INFO: Trying to get logs from node jerma-worker pod pod-cdbb955c-0f72-40e4-83e7-9279725367c7 container test-container: STEP: delete the pod Aug 28 04:36:27.013: INFO: Waiting for pod pod-cdbb955c-0f72-40e4-83e7-9279725367c7 to disappear Aug 28 04:36:27.041: INFO: Pod pod-cdbb955c-0f72-40e4-83e7-9279725367c7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:36:27.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1478" for this suite. 
• [SLOW TEST:6.283 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:36:27.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-f7a06c36-ebe8-4b3f-8ef1-24d2806fead9 STEP: Creating a pod to test consume secrets Aug 28 04:36:27.325: INFO: Waiting up to 5m0s for pod "pod-secrets-1746b8fb-c3f8-4607-980a-87cccf594cf3" in namespace "secrets-6856" to be "success or failure" Aug 28 04:36:27.346: INFO: Pod "pod-secrets-1746b8fb-c3f8-4607-980a-87cccf594cf3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.602893ms Aug 28 04:36:29.353: INFO: Pod "pod-secrets-1746b8fb-c3f8-4607-980a-87cccf594cf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028412239s Aug 28 04:36:31.388: INFO: Pod "pod-secrets-1746b8fb-c3f8-4607-980a-87cccf594cf3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062691094s Aug 28 04:36:33.396: INFO: Pod "pod-secrets-1746b8fb-c3f8-4607-980a-87cccf594cf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070618892s STEP: Saw pod success Aug 28 04:36:33.396: INFO: Pod "pod-secrets-1746b8fb-c3f8-4607-980a-87cccf594cf3" satisfied condition "success or failure" Aug 28 04:36:33.401: INFO: Trying to get logs from node jerma-worker pod pod-secrets-1746b8fb-c3f8-4607-980a-87cccf594cf3 container secret-volume-test: STEP: delete the pod Aug 28 04:36:33.426: INFO: Waiting for pod pod-secrets-1746b8fb-c3f8-4607-980a-87cccf594cf3 to disappear Aug 28 04:36:33.430: INFO: Pod pod-secrets-1746b8fb-c3f8-4607-980a-87cccf594cf3 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:36:33.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6856" for this suite. STEP: Destroying namespace "secret-namespace-5972" for this suite. 
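The point of the test above is namespace isolation at the type level: a secret volume can only name a secret, never a namespace, so resolution always happens in the pod's own namespace and the identically named secret created in the second namespace ("secret-namespace-5972" in the log) can never be picked up. A type-level sketch:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Note the shape: only SecretName, no namespace field exists on the type.
	// The kubelet resolves the name in the namespace of the mounting pod, so a
	// same-named secret in another namespace is invisible to this volume.
	src := v1.VolumeSource{
		Secret: &v1.SecretVolumeSource{SecretName: "secret-test-demo"}, // illustrative name
	}
	out, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(out))
}
```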
• [SLOW TEST:6.439 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2241,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:36:33.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-4aa0d49c-50a1-457a-9590-76a01fe09211 in namespace container-probe-721 Aug 28 04:36:39.882: INFO: Started pod liveness-4aa0d49c-50a1-457a-9590-76a01fe09211 in namespace container-probe-721 STEP: checking the pod's current state and verifying that restartCount is present Aug 28 04:36:40.413: INFO: Initial restart count of pod liveness-4aa0d49c-50a1-457a-9590-76a01fe09211 is 0 Aug 28 04:36:56.807: INFO: Restart count of pod container-probe-721/liveness-4aa0d49c-50a1-457a-9590-76a01fe09211 is now 1 (16.39376743s elapsed) Aug 28 04:37:14.867: INFO: Restart count of pod container-probe-721/liveness-4aa0d49c-50a1-457a-9590-76a01fe09211 is now 2 (34.454244762s elapsed) Aug 28 04:37:35.395: INFO: Restart count of pod container-probe-721/liveness-4aa0d49c-50a1-457a-9590-76a01fe09211 is now 3 (54.982259575s elapsed) Aug 28 04:37:53.454: INFO: Restart count of pod container-probe-721/liveness-4aa0d49c-50a1-457a-9590-76a01fe09211 is now 4 (1m13.040732547s elapsed) Aug 28 04:39:06.023: INFO: Restart count of pod container-probe-721/liveness-4aa0d49c-50a1-457a-9590-76a01fe09211 is now 5 (2m25.610237462s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:39:06.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-721" for this suite. 
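The monotonically rising restart counts above, with the gap before the final restart stretching past a minute, come from a liveness probe that keeps failing while the kubelet applies exponential backoff between container restarts. A sketch in the style of the standard exec-probe example, using the v1.17-era embedded Handler field name (later k8s.io/api releases call it ProbeHandler); this is not the test's exact container:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// Healthy for 30s, then the probe file disappears and every probe fails.
				Command: []string{"sh", "-c",
					"touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"},
				LivenessProbe: &v1.Probe{
					Handler: v1.Handler{ // v1.17-era field name
						Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
	// Each failed probe triggers a restart; the kubelet backs off exponentially
	// between restarts, which is why the elapsed gaps in the log grow while
	// restartCount only ever increases.
}
```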
• [SLOW TEST:152.776 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2254,"failed":0} SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:39:06.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-728156d0-ed7b-44d4-baf2-f32b86611576 STEP: Creating secret with name secret-projected-all-test-volume-e3df685a-d0e9-4342-b6ba-209e55c13ae1 STEP: Creating a pod to test Check all projections for projected volume plugin Aug 28 04:39:07.989: INFO: Waiting up to 5m0s for pod "projected-volume-07fd14f5-dbf6-4fa9-a342-79d6dd4d4b65" in namespace "projected-8790" to be "success or failure" Aug 28 04:39:08.144: INFO: Pod "projected-volume-07fd14f5-dbf6-4fa9-a342-79d6dd4d4b65": Phase="Pending", Reason="", readiness=false. Elapsed: 154.953798ms Aug 28 04:39:10.153: INFO: Pod "projected-volume-07fd14f5-dbf6-4fa9-a342-79d6dd4d4b65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163606688s Aug 28 04:39:12.262: INFO: Pod "projected-volume-07fd14f5-dbf6-4fa9-a342-79d6dd4d4b65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273018256s Aug 28 04:39:14.268: INFO: Pod "projected-volume-07fd14f5-dbf6-4fa9-a342-79d6dd4d4b65": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.278927547s STEP: Saw pod success Aug 28 04:39:14.268: INFO: Pod "projected-volume-07fd14f5-dbf6-4fa9-a342-79d6dd4d4b65" satisfied condition "success or failure" Aug 28 04:39:14.272: INFO: Trying to get logs from node jerma-worker pod projected-volume-07fd14f5-dbf6-4fa9-a342-79d6dd4d4b65 container projected-all-volume-test: STEP: delete the pod Aug 28 04:39:14.310: INFO: Waiting for pod projected-volume-07fd14f5-dbf6-4fa9-a342-79d6dd4d4b65 to disappear Aug 28 04:39:14.337: INFO: Pod projected-volume-07fd14f5-dbf6-4fa9-a342-79d6dd4d4b65 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:39:14.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8790" for this suite. • [SLOW TEST:8.102 seconds] [sig-storage] Projected combined /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2257,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:39:14.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5628 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-5628 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5628 Aug 28 04:39:14.502: INFO: Found 0 stateful pods, waiting for 1 Aug 28 04:39:24.509: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 28 04:39:24.515: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5628 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 28 04:39:26.041: INFO: stderr: "I0828 04:39:25.869469 3110 log.go:172] (0x4000adec60) (0x400072c1e0) Create stream\nI0828 04:39:25.871830 3110 log.go:172] (0x4000adec60) (0x400072c1e0) Stream added, broadcasting: 1\nI0828 04:39:25.883245 3110 log.go:172] (0x4000adec60) Reply frame received for 1\nI0828 04:39:25.884121 3110 log.go:172] (0x4000adec60) (0x40007d6000) Create stream\nI0828 04:39:25.884204 3110 log.go:172] (0x4000adec60) (0x40007d6000) Stream added, broadcasting: 3\nI0828 04:39:25.885857 3110 log.go:172] (0x4000adec60) Reply frame received for 3\nI0828 04:39:25.886293 3110 log.go:172] (0x4000adec60) (0x400072c280) Create stream\nI0828 04:39:25.886415 3110 log.go:172] (0x4000adec60) (0x400072c280) Stream added, broadcasting: 5\nI0828 04:39:25.887808 3110 log.go:172] (0x4000adec60) Reply frame received for 5\nI0828 04:39:25.981931 3110 log.go:172] (0x4000adec60) Data frame received for 5\nI0828 04:39:25.982179 3110 log.go:172] (0x400072c280) (5) Data frame handling\nI0828 04:39:25.982596 3110 log.go:172] (0x400072c280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 04:39:26.016813 3110 log.go:172] (0x4000adec60) Data frame received for 3\nI0828 04:39:26.016961 3110 log.go:172] (0x40007d6000) (3) Data frame handling\nI0828 04:39:26.017103 3110 log.go:172] (0x40007d6000) (3) Data frame sent\nI0828 04:39:26.017408 3110 log.go:172] (0x4000adec60) Data frame received for 5\nI0828 04:39:26.017564 3110 log.go:172] (0x4000adec60) Data frame received for 3\nI0828 04:39:26.017746 3110 log.go:172] (0x40007d6000) (3) Data frame handling\nI0828 04:39:26.017971 3110 log.go:172] (0x400072c280) (5) Data frame handling\nI0828 04:39:26.019520 3110 log.go:172] (0x4000adec60) Data frame received for 1\nI0828 04:39:26.019637 3110 log.go:172] (0x400072c1e0) (1) Data frame handling\nI0828 04:39:26.019753 3110 log.go:172] (0x400072c1e0) (1) Data frame sent\nI0828 04:39:26.021405 3110 log.go:172] (0x4000adec60) (0x400072c1e0) Stream removed, broadcasting: 1\nI0828 04:39:26.023998 3110 log.go:172] (0x4000adec60) Go away received\nI0828 04:39:26.027864 3110 log.go:172] (0x4000adec60) (0x400072c1e0) Stream removed, broadcasting: 1\nI0828 04:39:26.028198 3110 log.go:172] (0x4000adec60) (0x40007d6000) Stream removed, broadcasting: 3\nI0828 04:39:26.029439 3110 log.go:172] (0x4000adec60) (0x400072c280) Stream removed, broadcasting: 5\n" Aug 28 04:39:26.041: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 28 04:39:26.042: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 28 04:39:26.047: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 28 04:39:36.066: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 28 04:39:36.066: INFO: Waiting for statefulset status.replicas updated to 0 Aug 28 04:39:36.085: INFO: POD NODE PHASE GRACE CONDITIONS Aug 28 04:39:36.087: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:26 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:14 +0000 UTC }] Aug 28 04:39:36.087: INFO: Aug 28 04:39:36.087: INFO: StatefulSet ss has not reached scale 3, at 1 Aug 28 04:39:37.096: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992140144s Aug 28 04:39:38.312: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982929927s Aug 28 04:39:39.319: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.767591811s Aug 28 04:39:40.385: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.759740634s Aug 28 04:39:41.393: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.693879418s Aug 28 04:39:42.468: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.686764276s Aug 28 04:39:43.477: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.610824975s Aug 28 04:39:44.576: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.601922974s Aug 28 04:39:45.585: INFO: Verifying statefulset ss doesn't scale past 3 for another 503.032047ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5628 Aug 28 04:39:46.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5628 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:39:48.077: INFO: stderr: "I0828 04:39:47.933415 3132 log.go:172] (0x4000116370) (0x40008d8000) Create stream\nI0828 04:39:47.939619 3132 log.go:172] (0x4000116370) (0x40008d8000) Stream added, broadcasting: 1\nI0828 04:39:47.955055 3132 log.go:172] (0x4000116370) Reply frame received for 1\nI0828 04:39:47.955665 3132 log.go:172] (0x4000116370) (0x40007efb80) Create stream\nI0828 04:39:47.955731 3132 log.go:172] (0x4000116370) (0x40007efb80) Stream added, broadcasting: 3\nI0828 04:39:47.957893 3132 log.go:172] (0x4000116370) Reply frame received for 3\nI0828 04:39:47.958331 3132 log.go:172] (0x4000116370) (0x40008d80a0) Create stream\nI0828 04:39:47.958439 3132 log.go:172] (0x4000116370) (0x40008d80a0) Stream added, broadcasting: 5\nI0828 04:39:47.960681 3132 log.go:172] (0x4000116370) Reply frame received for 5\nI0828 04:39:48.053817 3132 log.go:172] (0x4000116370) Data frame received for 3\nI0828 04:39:48.054228 3132 log.go:172] (0x4000116370) Data frame received for 5\nI0828 04:39:48.054446 3132 log.go:172] (0x4000116370) Data frame received for 1\nI0828 04:39:48.054608 3132 log.go:172] (0x40008d8000) (1) Data frame handling\nI0828 04:39:48.054766 3132 log.go:172] (0x40008d80a0) (5) Data frame handling\nI0828 04:39:48.054966 3132 log.go:172] (0x40007efb80) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0828 04:39:48.056335 3132 log.go:172] (0x40007efb80) (3) Data frame sent\nI0828 04:39:48.056411 3132 log.go:172] (0x40008d80a0) (5) Data frame sent\nI0828 04:39:48.056496 3132 log.go:172] (0x40008d8000) (1) Data frame sent\nI0828 04:39:48.056980 3132 log.go:172] (0x4000116370) Data frame received for 3\nI0828 04:39:48.057062 3132 log.go:172] (0x40007efb80) (3) Data frame handling\nI0828 04:39:48.059167 3132 log.go:172] (0x4000116370) Data frame received for 5\nI0828 04:39:48.059645 3132 log.go:172] (0x4000116370) (0x40008d8000) Stream removed, broadcasting: 1\nI0828 04:39:48.060262 3132 log.go:172] (0x40008d80a0) (5) Data frame handling\nI0828 04:39:48.060568 3132 log.go:172] (0x4000116370) Go away received\nI0828 
04:39:48.064247 3132 log.go:172] (0x4000116370) (0x40008d8000) Stream removed, broadcasting: 1\nI0828 04:39:48.064525 3132 log.go:172] (0x4000116370) (0x40007efb80) Stream removed, broadcasting: 3\nI0828 04:39:48.064805 3132 log.go:172] (0x4000116370) (0x40008d80a0) Stream removed, broadcasting: 5\n" Aug 28 04:39:48.078: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 28 04:39:48.078: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 28 04:39:48.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5628 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:39:49.574: INFO: stderr: "I0828 04:39:49.425440 3156 log.go:172] (0x4000ac4000) (0x40006e19a0) Create stream\nI0828 04:39:49.428995 3156 log.go:172] (0x4000ac4000) (0x40006e19a0) Stream added, broadcasting: 1\nI0828 04:39:49.437263 3156 log.go:172] (0x4000ac4000) Reply frame received for 1\nI0828 04:39:49.437789 3156 log.go:172] (0x4000ac4000) (0x40006ce000) Create stream\nI0828 04:39:49.437846 3156 log.go:172] (0x4000ac4000) (0x40006ce000) Stream added, broadcasting: 3\nI0828 04:39:49.439245 3156 log.go:172] (0x4000ac4000) Reply frame received for 3\nI0828 04:39:49.439460 3156 log.go:172] (0x4000ac4000) (0x40006e1b80) Create stream\nI0828 04:39:49.439513 3156 log.go:172] (0x4000ac4000) (0x40006e1b80) Stream added, broadcasting: 5\nI0828 04:39:49.441124 3156 log.go:172] (0x4000ac4000) Reply frame received for 5\nI0828 04:39:49.552458 3156 log.go:172] (0x4000ac4000) Data frame received for 5\nI0828 04:39:49.552799 3156 log.go:172] (0x4000ac4000) Data frame received for 1\nI0828 04:39:49.553120 3156 log.go:172] (0x4000ac4000) Data frame received for 3\nI0828 04:39:49.553250 3156 log.go:172] (0x40006ce000) (3) Data frame handling\nI0828 04:39:49.553398 3156 log.go:172] (0x40006e19a0) (1) Data frame handling\nI0828 04:39:49.553639 3156 log.go:172] (0x40006e1b80) (5) Data frame handling\nI0828 04:39:49.555026 3156 log.go:172] (0x40006ce000) (3) Data frame sent\nI0828 04:39:49.555288 3156 log.go:172] (0x40006e19a0) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0828 04:39:49.555506 3156 log.go:172] (0x40006e1b80) (5) Data frame sent\nI0828 04:39:49.555779 3156 log.go:172] (0x4000ac4000) Data frame received for 5\nI0828 04:39:49.555923 3156 log.go:172] (0x40006e1b80) (5) Data frame handling\nI0828 04:39:49.556052 3156 log.go:172] (0x4000ac4000) Data frame received for 3\nI0828 04:39:49.556625 3156 log.go:172] (0x4000ac4000) (0x40006e19a0) Stream removed, broadcasting: 1\nI0828 04:39:49.557631 3156 log.go:172] (0x40006ce000) (3) Data frame handling\nI0828 04:39:49.561870 3156 log.go:172] (0x4000ac4000) (0x40006e19a0) Stream removed, broadcasting: 1\nI0828 04:39:49.562239 3156 log.go:172] (0x4000ac4000) (0x40006ce000) Stream removed, broadcasting: 3\nI0828 04:39:49.563067 3156 log.go:172] (0x4000ac4000) (0x40006e1b80) Stream removed, broadcasting: 5\n" Aug 28 04:39:49.575: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 28 04:39:49.575: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 28 04:39:49.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5628 ss-2 
-- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 28 04:39:51.029: INFO: stderr: "I0828 04:39:50.897904 3181 log.go:172] (0x4000a1a000) (0x40008f2000) Create stream\nI0828 04:39:50.904156 3181 log.go:172] (0x4000a1a000) (0x40008f2000) Stream added, broadcasting: 1\nI0828 04:39:50.916240 3181 log.go:172] (0x4000a1a000) Reply frame received for 1\nI0828 04:39:50.916932 3181 log.go:172] (0x4000a1a000) (0x40007dfb80) Create stream\nI0828 04:39:50.916997 3181 log.go:172] (0x4000a1a000) (0x40007dfb80) Stream added, broadcasting: 3\nI0828 04:39:50.918427 3181 log.go:172] (0x4000a1a000) Reply frame received for 3\nI0828 04:39:50.918737 3181 log.go:172] (0x4000a1a000) (0x400064c000) Create stream\nI0828 04:39:50.918806 3181 log.go:172] (0x4000a1a000) (0x400064c000) Stream added, broadcasting: 5\nI0828 04:39:50.920013 3181 log.go:172] (0x4000a1a000) Reply frame received for 5\nI0828 04:39:51.000175 3181 log.go:172] (0x4000a1a000) Data frame received for 3\nI0828 04:39:51.000796 3181 log.go:172] (0x4000a1a000) Data frame received for 5\nI0828 04:39:51.000944 3181 log.go:172] (0x400064c000) (5) Data frame handling\nI0828 04:39:51.001049 3181 log.go:172] (0x40007dfb80) (3) Data frame handling\nI0828 04:39:51.001201 3181 log.go:172] (0x4000a1a000) Data frame received for 1\nI0828 04:39:51.001296 3181 log.go:172] (0x40008f2000) (1) Data frame handling\nI0828 04:39:51.001858 3181 log.go:172] (0x400064c000) (5) Data frame sent\nI0828 04:39:51.002045 3181 log.go:172] (0x40008f2000) (1) Data frame sent\nI0828 04:39:51.002212 3181 log.go:172] (0x4000a1a000) Data frame received for 5\nI0828 04:39:51.002324 3181 log.go:172] (0x400064c000) (5) Data frame handling\nI0828 04:39:51.002524 3181 log.go:172] (0x40007dfb80) (3) Data frame sent\nI0828 04:39:51.002604 3181 log.go:172] (0x4000a1a000) Data frame received for 3\nI0828 04:39:51.002662 3181 log.go:172] (0x40007dfb80) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0828 04:39:51.005782 3181 log.go:172] (0x4000a1a000) (0x40008f2000) Stream removed, broadcasting: 1\nI0828 04:39:51.008423 3181 log.go:172] (0x4000a1a000) Go away received\nI0828 04:39:51.011102 3181 log.go:172] (0x4000a1a000) (0x40008f2000) Stream removed, broadcasting: 1\nI0828 04:39:51.011821 3181 log.go:172] (0x4000a1a000) (0x40007dfb80) Stream removed, broadcasting: 3\nI0828 04:39:51.012080 3181 log.go:172] (0x4000a1a000) (0x400064c000) Stream removed, broadcasting: 5\n" Aug 28 04:39:51.030: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 28 04:39:51.030: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 28 04:39:51.037: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 28 04:39:51.037: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 28 04:39:51.037: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 28 04:39:51.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5628 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 28 04:39:52.540: INFO: stderr: "I0828 04:39:52.414744 3203 log.go:172] (0x4000a1a0b0) (0x40009e0000) Create stream\nI0828 04:39:52.418692 3203 log.go:172] 
(0x4000a1a0b0) (0x40009e0000) Stream added, broadcasting: 1\nI0828 04:39:52.432417 3203 log.go:172] (0x4000a1a0b0) Reply frame received for 1\nI0828 04:39:52.433046 3203 log.go:172] (0x4000a1a0b0) (0x40009e00a0) Create stream\nI0828 04:39:52.433117 3203 log.go:172] (0x4000a1a0b0) (0x40009e00a0) Stream added, broadcasting: 3\nI0828 04:39:52.434635 3203 log.go:172] (0x4000a1a0b0) Reply frame received for 3\nI0828 04:39:52.435025 3203 log.go:172] (0x4000a1a0b0) (0x40009e01e0) Create stream\nI0828 04:39:52.435128 3203 log.go:172] (0x4000a1a0b0) (0x40009e01e0) Stream added, broadcasting: 5\nI0828 04:39:52.436626 3203 log.go:172] (0x4000a1a0b0) Reply frame received for 5\nI0828 04:39:52.524609 3203 log.go:172] (0x4000a1a0b0) Data frame received for 5\nI0828 04:39:52.525013 3203 log.go:172] (0x40009e01e0) (5) Data frame handling\nI0828 04:39:52.525190 3203 log.go:172] (0x4000a1a0b0) Data frame received for 3\nI0828 04:39:52.525344 3203 log.go:172] (0x40009e00a0) (3) Data frame handling\nI0828 04:39:52.525440 3203 log.go:172] (0x40009e01e0) (5) Data frame sent\nI0828 04:39:52.525615 3203 log.go:172] (0x40009e00a0) (3) Data frame sent\nI0828 04:39:52.525721 3203 log.go:172] (0x4000a1a0b0) Data frame received for 3\nI0828 04:39:52.525806 3203 log.go:172] (0x40009e00a0) (3) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 04:39:52.526130 3203 log.go:172] (0x4000a1a0b0) Data frame received for 5\nI0828 04:39:52.526258 3203 log.go:172] (0x40009e01e0) (5) Data frame handling\nI0828 04:39:52.526478 3203 log.go:172] (0x4000a1a0b0) Data frame received for 1\nI0828 04:39:52.526578 3203 log.go:172] (0x40009e0000) (1) Data frame handling\nI0828 04:39:52.526690 3203 log.go:172] (0x40009e0000) (1) Data frame sent\nI0828 04:39:52.528146 3203 log.go:172] (0x4000a1a0b0) (0x40009e0000) Stream removed, broadcasting: 1\nI0828 04:39:52.531136 3203 log.go:172] (0x4000a1a0b0) Go away received\nI0828 04:39:52.533870 3203 log.go:172] (0x4000a1a0b0) (0x40009e0000) Stream removed, broadcasting: 1\nI0828 04:39:52.534244 3203 log.go:172] (0x4000a1a0b0) (0x40009e00a0) Stream removed, broadcasting: 3\nI0828 04:39:52.534508 3203 log.go:172] (0x4000a1a0b0) (0x40009e01e0) Stream removed, broadcasting: 5\n" Aug 28 04:39:52.541: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 28 04:39:52.541: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 28 04:39:52.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5628 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 28 04:39:54.044: INFO: stderr: "I0828 04:39:53.886192 3225 log.go:172] (0x4000120370) (0x4000b0a0a0) Create stream\nI0828 04:39:53.889152 3225 log.go:172] (0x4000120370) (0x4000b0a0a0) Stream added, broadcasting: 1\nI0828 04:39:53.898693 3225 log.go:172] (0x4000120370) Reply frame received for 1\nI0828 04:39:53.899261 3225 log.go:172] (0x4000120370) (0x4000837ae0) Create stream\nI0828 04:39:53.899324 3225 log.go:172] (0x4000120370) (0x4000837ae0) Stream added, broadcasting: 3\nI0828 04:39:53.900619 3225 log.go:172] (0x4000120370) Reply frame received for 3\nI0828 04:39:53.900903 3225 log.go:172] (0x4000120370) (0x4000b0a1e0) Create stream\nI0828 04:39:53.900957 3225 log.go:172] (0x4000120370) (0x4000b0a1e0) Stream added, broadcasting: 5\nI0828 04:39:53.902978 3225 log.go:172] (0x4000120370) Reply frame received for 5\nI0828 
04:39:53.996106 3225 log.go:172] (0x4000120370) Data frame received for 5\nI0828 04:39:53.996317 3225 log.go:172] (0x4000b0a1e0) (5) Data frame handling\nI0828 04:39:53.996673 3225 log.go:172] (0x4000b0a1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 04:39:54.022133 3225 log.go:172] (0x4000120370) Data frame received for 3\nI0828 04:39:54.022274 3225 log.go:172] (0x4000837ae0) (3) Data frame handling\nI0828 04:39:54.022409 3225 log.go:172] (0x4000120370) Data frame received for 5\nI0828 04:39:54.022535 3225 log.go:172] (0x4000b0a1e0) (5) Data frame handling\nI0828 04:39:54.022706 3225 log.go:172] (0x4000837ae0) (3) Data frame sent\nI0828 04:39:54.022798 3225 log.go:172] (0x4000120370) Data frame received for 3\nI0828 04:39:54.022859 3225 log.go:172] (0x4000837ae0) (3) Data frame handling\nI0828 04:39:54.023058 3225 log.go:172] (0x4000120370) Data frame received for 1\nI0828 04:39:54.023168 3225 log.go:172] (0x4000b0a0a0) (1) Data frame handling\nI0828 04:39:54.023286 3225 log.go:172] (0x4000b0a0a0) (1) Data frame sent\nI0828 04:39:54.026083 3225 log.go:172] (0x4000120370) (0x4000b0a0a0) Stream removed, broadcasting: 1\nI0828 04:39:54.027437 3225 log.go:172] (0x4000120370) Go away received\nI0828 04:39:54.032198 3225 log.go:172] (0x4000120370) (0x4000b0a0a0) Stream removed, broadcasting: 1\nI0828 04:39:54.032972 3225 log.go:172] (0x4000120370) (0x4000837ae0) Stream removed, broadcasting: 3\nI0828 04:39:54.033458 3225 log.go:172] (0x4000120370) (0x4000b0a1e0) Stream removed, broadcasting: 5\n" Aug 28 04:39:54.045: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 28 04:39:54.045: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 28 04:39:54.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5628 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 28 04:39:55.554: INFO: stderr: "I0828 04:39:55.410541 3248 log.go:172] (0x4000ae2bb0) (0x40007221e0) Create stream\nI0828 04:39:55.414648 3248 log.go:172] (0x4000ae2bb0) (0x40007221e0) Stream added, broadcasting: 1\nI0828 04:39:55.425165 3248 log.go:172] (0x4000ae2bb0) Reply frame received for 1\nI0828 04:39:55.425721 3248 log.go:172] (0x4000ae2bb0) (0x40007f2000) Create stream\nI0828 04:39:55.425780 3248 log.go:172] (0x4000ae2bb0) (0x40007f2000) Stream added, broadcasting: 3\nI0828 04:39:55.427385 3248 log.go:172] (0x4000ae2bb0) Reply frame received for 3\nI0828 04:39:55.427790 3248 log.go:172] (0x4000ae2bb0) (0x4000722280) Create stream\nI0828 04:39:55.427888 3248 log.go:172] (0x4000ae2bb0) (0x4000722280) Stream added, broadcasting: 5\nI0828 04:39:55.429728 3248 log.go:172] (0x4000ae2bb0) Reply frame received for 5\nI0828 04:39:55.497162 3248 log.go:172] (0x4000ae2bb0) Data frame received for 5\nI0828 04:39:55.497388 3248 log.go:172] (0x4000722280) (5) Data frame handling\nI0828 04:39:55.497843 3248 log.go:172] (0x4000722280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 04:39:55.532561 3248 log.go:172] (0x4000ae2bb0) Data frame received for 3\nI0828 04:39:55.532687 3248 log.go:172] (0x4000ae2bb0) Data frame received for 5\nI0828 04:39:55.532860 3248 log.go:172] (0x4000722280) (5) Data frame handling\nI0828 04:39:55.533174 3248 log.go:172] (0x40007f2000) (3) Data frame handling\nI0828 04:39:55.533385 3248 log.go:172] (0x40007f2000) (3) Data frame sent\nI0828 
04:39:55.533561 3248 log.go:172] (0x4000ae2bb0) Data frame received for 3\nI0828 04:39:55.533707 3248 log.go:172] (0x40007f2000) (3) Data frame handling\nI0828 04:39:55.534682 3248 log.go:172] (0x4000ae2bb0) Data frame received for 1\nI0828 04:39:55.534795 3248 log.go:172] (0x40007221e0) (1) Data frame handling\nI0828 04:39:55.534882 3248 log.go:172] (0x40007221e0) (1) Data frame sent\nI0828 04:39:55.536313 3248 log.go:172] (0x4000ae2bb0) (0x40007221e0) Stream removed, broadcasting: 1\nI0828 04:39:55.540315 3248 log.go:172] (0x4000ae2bb0) Go away received\nI0828 04:39:55.544622 3248 log.go:172] (0x4000ae2bb0) (0x40007221e0) Stream removed, broadcasting: 1\nI0828 04:39:55.545087 3248 log.go:172] (0x4000ae2bb0) (0x40007f2000) Stream removed, broadcasting: 3\nI0828 04:39:55.545337 3248 log.go:172] (0x4000ae2bb0) (0x4000722280) Stream removed, broadcasting: 5\n" Aug 28 04:39:55.555: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 28 04:39:55.556: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 28 04:39:55.556: INFO: Waiting for statefulset status.replicas updated to 0 Aug 28 04:39:55.562: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 28 04:40:05.577: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 28 04:40:05.577: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 28 04:40:05.577: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 28 04:40:05.625: INFO: POD NODE PHASE GRACE CONDITIONS Aug 28 04:40:05.625: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:14 +0000 UTC }] Aug 28 04:40:05.625: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC }] Aug 28 04:40:05.626: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC }] Aug 28 04:40:05.626: INFO: Aug 28 04:40:05.626: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 28 04:40:06.768: INFO: POD NODE PHASE GRACE CONDITIONS Aug 28 04:40:06.768: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-08-28 04:39:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:14 +0000 UTC }] Aug 28 04:40:06.769: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC }] Aug 28 04:40:06.769: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC }] Aug 28 04:40:06.769: INFO: Aug 28 04:40:06.769: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 28 04:40:07.778: INFO: POD NODE PHASE GRACE CONDITIONS Aug 28 04:40:07.778: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:14 +0000 UTC }] Aug 28 04:40:07.778: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC }] Aug 28 04:40:07.778: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC }] Aug 28 04:40:07.778: INFO: Aug 28 04:40:07.778: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 28 04:40:08.787: INFO: POD NODE PHASE GRACE CONDITIONS Aug 28 04:40:08.787: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:52 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:14 +0000 UTC }] Aug 28 04:40:08.787: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC }] Aug 28 04:40:08.788: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC }] Aug 28 04:40:08.788: INFO: Aug 28 04:40:08.788: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 28 04:40:09.797: INFO: POD NODE PHASE GRACE CONDITIONS Aug 28 04:40:09.797: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:14 +0000 UTC }] Aug 28 04:40:09.798: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC }] Aug 28 04:40:09.798: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC }] Aug 28 04:40:09.799: INFO: Aug 28 04:40:09.799: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 28 04:40:10.807: INFO: POD NODE PHASE GRACE CONDITIONS Aug 28 04:40:10.808: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:14 +0000 UTC }] Aug 28 04:40:10.808: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC }] Aug 28 04:40:10.809: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 04:39:36 +0000 UTC }] Aug 28 04:40:10.809: INFO: Aug 28 04:40:10.809: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 28 04:40:11.814: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.777257746s Aug 28 04:40:12.820: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.771822517s Aug 28 04:40:13.830: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.765548257s Aug 28 04:40:14.836: INFO: Verifying statefulset ss doesn't scale past 0 for another 756.260022ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5628 Aug 28 04:40:15.841: INFO: Scaling statefulset ss to 0 Aug 28 04:40:15.852: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Aug 28 04:40:15.855: INFO: Deleting all statefulsets in ns statefulset-5628 Aug 28 04:40:15.858: INFO: Scaling statefulset ss to 0 Aug 28 04:40:15.868: INFO: Waiting for statefulset status.replicas updated to 0 Aug 28 04:40:15.870: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:40:15.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5628" for this suite. 
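For reference, burst scaling is a property of podManagementPolicy: Parallel, and the mv .../index.html /tmp/ execs in the log above are how the suite drives pods unready: the readiness probe fetches the page that file backs. A sketch of a StatefulSet shaped like the ss above follows; the image tag, labels, and probe details are assumptions (the htdocs paths in the log imply an httpd image), not the suite's exact spec.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                  # matches the headless service created above
  replicas: 3
  podManagementPolicy: Parallel      # burst mode: scale up/down without waiting for Ready
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4.38-alpine   # assumed tag
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
          periodSeconds: 1

With the default OrderedReady policy, the controller would wait for each pod to become Ready before acting on the next, so the scale operations above would stall on the deliberately unready pods instead of running to completion.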
• [SLOW TEST:61.588 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":131,"skipped":2260,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:40:15.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Aug 28 04:40:16.218: INFO: Waiting up to 5m0s for pod "client-containers-be7669d0-cd55-49f8-a01b-efe1ce91d801" in namespace "containers-7050" to be "success or failure" Aug 28 04:40:16.239: INFO: Pod "client-containers-be7669d0-cd55-49f8-a01b-efe1ce91d801": Phase="Pending", Reason="", readiness=false. Elapsed: 20.706852ms Aug 28 04:40:18.245: INFO: Pod "client-containers-be7669d0-cd55-49f8-a01b-efe1ce91d801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027187308s Aug 28 04:40:20.252: INFO: Pod "client-containers-be7669d0-cd55-49f8-a01b-efe1ce91d801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033645879s STEP: Saw pod success Aug 28 04:40:20.252: INFO: Pod "client-containers-be7669d0-cd55-49f8-a01b-efe1ce91d801" satisfied condition "success or failure" Aug 28 04:40:20.257: INFO: Trying to get logs from node jerma-worker pod client-containers-be7669d0-cd55-49f8-a01b-efe1ce91d801 container test-container: STEP: delete the pod Aug 28 04:40:20.312: INFO: Waiting for pod client-containers-be7669d0-cd55-49f8-a01b-efe1ce91d801 to disappear Aug 28 04:40:20.322: INFO: Pod client-containers-be7669d0-cd55-49f8-a01b-efe1ce91d801 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:40:20.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7050" for this suite. 
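The argument-override test boils down to setting args on a container, which replaces the image's default CMD while leaving the entrypoint alone. A minimal illustration, with the image and values invented for the sketch:

apiVersion: v1
kind: Pod
metadata:
  name: args-demo              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31        # assumed image; busybox has no ENTRYPOINT and CMD "sh"
    # args maps to the container runtime's CMD; supplying it overrides the
    # image default, so this container runs `echo ...` instead of `sh`.
    args: ["echo", "overridden", "arguments"]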
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2272,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:40:20.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 28 04:40:20.411: INFO: Waiting up to 5m0s for pod "pod-95282816-4038-4ef3-a610-2372aa0541c9" in namespace "emptydir-9195" to be "success or failure" Aug 28 04:40:20.466: INFO: Pod "pod-95282816-4038-4ef3-a610-2372aa0541c9": Phase="Pending", Reason="", readiness=false. Elapsed: 54.79235ms Aug 28 04:40:22.569: INFO: Pod "pod-95282816-4038-4ef3-a610-2372aa0541c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157776719s Aug 28 04:40:24.584: INFO: Pod "pod-95282816-4038-4ef3-a610-2372aa0541c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172585558s Aug 28 04:40:26.875: INFO: Pod "pod-95282816-4038-4ef3-a610-2372aa0541c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.463538612s Aug 28 04:40:28.881: INFO: Pod "pod-95282816-4038-4ef3-a610-2372aa0541c9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.470138632s Aug 28 04:40:30.888: INFO: Pod "pod-95282816-4038-4ef3-a610-2372aa0541c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.47685158s STEP: Saw pod success Aug 28 04:40:30.888: INFO: Pod "pod-95282816-4038-4ef3-a610-2372aa0541c9" satisfied condition "success or failure" Aug 28 04:40:30.894: INFO: Trying to get logs from node jerma-worker pod pod-95282816-4038-4ef3-a610-2372aa0541c9 container test-container: STEP: delete the pod Aug 28 04:40:30.920: INFO: Waiting for pod pod-95282816-4038-4ef3-a610-2372aa0541c9 to disappear Aug 28 04:40:30.924: INFO: Pod pod-95282816-4038-4ef3-a610-2372aa0541c9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:40:30.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9195" for this suite. 
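The emptyDir test names encode three dimensions: the user the container runs as, the file mode under test, and the volume medium ("default" is node disk, "Memory" is tmpfs). A rough equivalent of the (non-root,0666,default) case just completed, with all specifics assumed rather than taken from the suite:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666          # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # the "non-root" dimension; the uid is an assumption
  containers:
  - name: test-container
    image: busybox:1.31        # assumed image; the suite uses its own mount-test image
    # Write a file with mode 0666 into the volume and print the mode back;
    # the suite asserts on equivalent output from its own binary.
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # the "default" medium, i.e. backed by node disk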
• [SLOW TEST:10.600 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2278,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:40:30.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 28 04:40:33.378: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 28 04:40:35.423: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186433, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186433, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186433, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186433, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 28 04:40:37.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186433, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186433, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186433, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186433, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 28 04:40:40.474: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Aug 28 04:40:44.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-8797 to-be-attached-pod -i -c=container1' Aug 28 04:40:45.938: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:40:45.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8797" for this suite. STEP: Destroying namespace "webhook-8797-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.214 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":134,"skipped":2289,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:40:46.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for 
the deployment to be ready Aug 28 04:40:51.076: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 28 04:40:53.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186451, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186451, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186451, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186451, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 28 04:40:56.138: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:40:56.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7565" for this suite. STEP: Destroying namespace "webhook-7565-markers" for this suite. 
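Both webhook tests register admission webhooks through the AdmissionRegistration API; the fail-closed case points the API server at a backend it can never reach and relies on failurePolicy: Fail to reject the request anyway. A sketch, with every name and the target service invented for illustration:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-example            # illustrative name
webhooks:
- name: fail-closed.example.com
  failurePolicy: Fail                  # fail closed: an unreachable webhook rejects the request
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-demo          # illustrative namespace
      name: no-such-service            # deliberately nonexistent backend
      path: /validate
  admissionReviewVersions: ["v1"]
  sideEffects: None

Creating a configmap matching the rule then fails unconditionally, which is exactly what the "create a configmap should be unconditionally rejected" step above checks.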
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.244 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":135,"skipped":2292,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:40:56.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-a56210ac-05b4-4d16-b4a9-e447343f6c3a STEP: Creating a pod to test consume secrets Aug 28 04:40:56.512: INFO: Waiting up to 5m0s for pod "pod-secrets-db0e90df-1287-421a-8d76-39a705fc20ee" in namespace "secrets-8571" to be "success or failure" Aug 28 04:40:56.522: INFO: Pod "pod-secrets-db0e90df-1287-421a-8d76-39a705fc20ee": Phase="Pending", Reason="", readiness=false. Elapsed: 10.394203ms Aug 28 04:40:58.529: INFO: Pod "pod-secrets-db0e90df-1287-421a-8d76-39a705fc20ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017382477s Aug 28 04:41:00.537: INFO: Pod "pod-secrets-db0e90df-1287-421a-8d76-39a705fc20ee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025005374s STEP: Saw pod success Aug 28 04:41:00.537: INFO: Pod "pod-secrets-db0e90df-1287-421a-8d76-39a705fc20ee" satisfied condition "success or failure" Aug 28 04:41:00.542: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-db0e90df-1287-421a-8d76-39a705fc20ee container secret-volume-test: STEP: delete the pod Aug 28 04:41:00.781: INFO: Waiting for pod pod-secrets-db0e90df-1287-421a-8d76-39a705fc20ee to disappear Aug 28 04:41:00.893: INFO: Pod pod-secrets-db0e90df-1287-421a-8d76-39a705fc20ee no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:41:00.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8571" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2334,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:41:00.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:41:01.315: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:41:02.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3713" for this suite. 
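The "custom resource defaulting for requests and from storage works" case above turns on OpenAPI v3 structural schemas: a `default` declared in the storage version's schema is applied both when objects are admitted and when they are read back from etcd. As a rough sketch only (not the suite's own fixture -- the group and kind names here are invented, and it assumes k8s.io/apiextensions-apiserver and k8s.io/apimachinery are on the module path), such a CRD can be declared with the apiextensions/v1 Go types like this:

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A hypothetical CRD whose v1 schema defaults .spec.replicas to 1.
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget",
				Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {
								Type: "object",
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									"replicas": {
										Type: "integer",
										// Applied on create/update *and* when reading from storage,
										// which is exactly what the conformance case verifies.
										Default: &apiextensionsv1.JSON{Raw: []byte(`1`)},
									},
								},
							},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}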
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":137,"skipped":2350,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:41:02.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:41:03.234: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-acefeb98-e46d-4c22-a0cd-28a27e7371a1" in namespace "security-context-test-8662" to be "success or failure" Aug 28 04:41:03.337: INFO: Pod "busybox-readonly-false-acefeb98-e46d-4c22-a0cd-28a27e7371a1": Phase="Pending", Reason="", readiness=false. Elapsed: 103.293833ms Aug 28 04:41:05.359: INFO: Pod "busybox-readonly-false-acefeb98-e46d-4c22-a0cd-28a27e7371a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125004317s Aug 28 04:41:07.364: INFO: Pod "busybox-readonly-false-acefeb98-e46d-4c22-a0cd-28a27e7371a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130561519s Aug 28 04:41:09.534: INFO: Pod "busybox-readonly-false-acefeb98-e46d-4c22-a0cd-28a27e7371a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.299746145s Aug 28 04:41:09.534: INFO: Pod "busybox-readonly-false-acefeb98-e46d-4c22-a0cd-28a27e7371a1" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:41:09.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8662" for this suite. 
• [SLOW TEST:6.631 seconds] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2381,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:41:09.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 28 04:41:10.138: INFO: Waiting up to 5m0s for pod "pod-b01b41f5-df40-44c7-b2da-755e7dc07b43" in namespace "emptydir-8309" to be "success or failure" Aug 28 04:41:10.317: INFO: Pod "pod-b01b41f5-df40-44c7-b2da-755e7dc07b43": Phase="Pending", Reason="", readiness=false. Elapsed: 179.139165ms Aug 28 04:41:12.323: INFO: Pod "pod-b01b41f5-df40-44c7-b2da-755e7dc07b43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185335167s Aug 28 04:41:14.329: INFO: Pod "pod-b01b41f5-df40-44c7-b2da-755e7dc07b43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191071649s Aug 28 04:41:16.336: INFO: Pod "pod-b01b41f5-df40-44c7-b2da-755e7dc07b43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.197388102s STEP: Saw pod success Aug 28 04:41:16.336: INFO: Pod "pod-b01b41f5-df40-44c7-b2da-755e7dc07b43" satisfied condition "success or failure" Aug 28 04:41:16.341: INFO: Trying to get logs from node jerma-worker pod pod-b01b41f5-df40-44c7-b2da-755e7dc07b43 container test-container: STEP: delete the pod Aug 28 04:41:16.398: INFO: Waiting for pod pod-b01b41f5-df40-44c7-b2da-755e7dc07b43 to disappear Aug 28 04:41:16.414: INFO: Pod pod-b01b41f5-df40-44c7-b2da-755e7dc07b43 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:41:16.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8309" for this suite. 
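The EmptyDir case above exercises a tmpfs-backed scratch volume with a 0644 file created by a non-root user; the (root,0666,tmpfs) case a little further down differs only in the uid and the mode. A hedged sketch of an equivalent pod (names, image, and uid are placeholders, not the suite's mounttest fixture):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // run as a non-root user
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs rather than node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "writer",
				Image: "busybox",
				// Create a 0644 file and print its mode, mirroring what the test verifies.
				Command: []string{"sh", "-c",
					"touch /scratch/f && chmod 0644 /scratch/f && stat -c %a /scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}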
• [SLOW TEST:6.901 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2384,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:41:16.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-6696/configmap-test-f0c74d83-c7d4-497c-bcf1-3918d296b86b STEP: Creating a pod to test consume configMaps Aug 28 04:41:16.538: INFO: Waiting up to 5m0s for pod "pod-configmaps-f57d5c2f-2f5f-4ec3-9a9f-3885cca5af52" in namespace "configmap-6696" to be "success or failure" Aug 28 04:41:16.581: INFO: Pod "pod-configmaps-f57d5c2f-2f5f-4ec3-9a9f-3885cca5af52": Phase="Pending", Reason="", readiness=false. Elapsed: 42.592149ms Aug 28 04:41:18.635: INFO: Pod "pod-configmaps-f57d5c2f-2f5f-4ec3-9a9f-3885cca5af52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096960702s Aug 28 04:41:20.642: INFO: Pod "pod-configmaps-f57d5c2f-2f5f-4ec3-9a9f-3885cca5af52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103873822s Aug 28 04:41:22.649: INFO: Pod "pod-configmaps-f57d5c2f-2f5f-4ec3-9a9f-3885cca5af52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.110612073s STEP: Saw pod success Aug 28 04:41:22.649: INFO: Pod "pod-configmaps-f57d5c2f-2f5f-4ec3-9a9f-3885cca5af52" satisfied condition "success or failure" Aug 28 04:41:22.653: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-f57d5c2f-2f5f-4ec3-9a9f-3885cca5af52 container env-test: STEP: delete the pod Aug 28 04:41:22.680: INFO: Waiting for pod pod-configmaps-f57d5c2f-2f5f-4ec3-9a9f-3885cca5af52 to disappear Aug 28 04:41:22.686: INFO: Pod pod-configmaps-f57d5c2f-2f5f-4ec3-9a9f-3885cca5af52 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:41:22.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6696" for this suite. 
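The ConfigMap case above injects a ConfigMap key into a container's environment rather than mounting it as a volume. A minimal sketch of the two objects involved (ConfigMap, variable, and key names are invented for illustration):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-env-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep CONFIG_DATA_1"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					// Resolve the value of key "data-1" from the ConfigMap at pod start;
					// later edits to the ConfigMap do NOT update a running container's env.
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}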
• [SLOW TEST:6.267 seconds] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2397,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:41:22.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 28 04:41:22.799: INFO: Waiting up to 5m0s for pod "pod-e1688780-0119-4a03-8b6b-ed9a53086ced" in namespace "emptydir-1733" to be "success or failure" Aug 28 04:41:22.875: INFO: Pod "pod-e1688780-0119-4a03-8b6b-ed9a53086ced": Phase="Pending", Reason="", readiness=false. Elapsed: 75.26899ms Aug 28 04:41:24.881: INFO: Pod "pod-e1688780-0119-4a03-8b6b-ed9a53086ced": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081640587s Aug 28 04:41:26.887: INFO: Pod "pod-e1688780-0119-4a03-8b6b-ed9a53086ced": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087926502s STEP: Saw pod success Aug 28 04:41:26.888: INFO: Pod "pod-e1688780-0119-4a03-8b6b-ed9a53086ced" satisfied condition "success or failure" Aug 28 04:41:26.891: INFO: Trying to get logs from node jerma-worker pod pod-e1688780-0119-4a03-8b6b-ed9a53086ced container test-container: STEP: delete the pod Aug 28 04:41:26.914: INFO: Waiting for pod pod-e1688780-0119-4a03-8b6b-ed9a53086ced to disappear Aug 28 04:41:26.969: INFO: Pod pod-e1688780-0119-4a03-8b6b-ed9a53086ced no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:41:26.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1733" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2404,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:41:26.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2983 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 28 04:41:27.259: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 28 04:41:53.574: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.12:8080/dial?request=hostname&protocol=http&host=10.244.2.124&port=8080&tries=1'] Namespace:pod-network-test-2983 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 04:41:53.575: INFO: >>> kubeConfig: /root/.kube/config I0828 04:41:53.639387 8 log.go:172] (0x40029a5ad0) (0x4001721ea0) Create stream I0828 04:41:53.639571 8 log.go:172] (0x40029a5ad0) (0x4001721ea0) Stream added, broadcasting: 1 I0828 04:41:53.643192 8 log.go:172] (0x40029a5ad0) Reply frame received for 1 I0828 04:41:53.643478 8 log.go:172] (0x40029a5ad0) (0x4001e2a6e0) Create stream I0828 04:41:53.643618 8 log.go:172] (0x40029a5ad0) (0x4001e2a6e0) Stream added, broadcasting: 3 I0828 04:41:53.645821 8 log.go:172] (0x40029a5ad0) Reply frame received for 3 I0828 04:41:53.646042 8 log.go:172] (0x40029a5ad0) (0x4000f265a0) Create stream I0828 04:41:53.646168 8 log.go:172] (0x40029a5ad0) (0x4000f265a0) Stream added, broadcasting: 5 I0828 04:41:53.647901 8 log.go:172] (0x40029a5ad0) Reply frame received for 5 I0828 04:41:53.733537 8 log.go:172] (0x40029a5ad0) Data frame received for 3 I0828 04:41:53.733681 8 log.go:172] (0x4001e2a6e0) (3) Data frame handling I0828 04:41:53.733840 8 log.go:172] (0x4001e2a6e0) (3) Data frame sent I0828 04:41:53.733991 8 log.go:172] (0x40029a5ad0) Data frame received for 5 I0828 04:41:53.734136 8 log.go:172] (0x4000f265a0) (5) Data frame handling I0828 04:41:53.734244 8 log.go:172] (0x40029a5ad0) Data frame received for 3 I0828 04:41:53.734351 8 log.go:172] (0x4001e2a6e0) (3) Data frame handling I0828 04:41:53.735118 8 log.go:172] (0x40029a5ad0) Data frame received for 1 I0828 04:41:53.735253 8 log.go:172] (0x4001721ea0) (1) Data frame handling I0828 04:41:53.735352 8 log.go:172] (0x4001721ea0) (1) Data frame sent I0828 04:41:53.735443 8 log.go:172] (0x40029a5ad0) (0x4001721ea0) Stream removed, broadcasting: 1 I0828 04:41:53.735561 8 log.go:172] (0x40029a5ad0) Go away received I0828 04:41:53.735827 8 
log.go:172] (0x40029a5ad0) (0x4001721ea0) Stream removed, broadcasting: 1 I0828 04:41:53.735994 8 log.go:172] (0x40029a5ad0) (0x4001e2a6e0) Stream removed, broadcasting: 3 I0828 04:41:53.736093 8 log.go:172] (0x40029a5ad0) (0x4000f265a0) Stream removed, broadcasting: 5 Aug 28 04:41:53.737: INFO: Waiting for responses: map[] Aug 28 04:41:53.743: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.12:8080/dial?request=hostname&protocol=http&host=10.244.1.11&port=8080&tries=1'] Namespace:pod-network-test-2983 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 28 04:41:53.743: INFO: >>> kubeConfig: /root/.kube/config I0828 04:41:53.806146 8 log.go:172] (0x4002cae4d0) (0x4000f270e0) Create stream I0828 04:41:53.806316 8 log.go:172] (0x4002cae4d0) (0x4000f270e0) Stream added, broadcasting: 1 I0828 04:41:53.809638 8 log.go:172] (0x4002cae4d0) Reply frame received for 1 I0828 04:41:53.809929 8 log.go:172] (0x4002cae4d0) (0x4001a1c320) Create stream I0828 04:41:53.810118 8 log.go:172] (0x4002cae4d0) (0x4001a1c320) Stream added, broadcasting: 3 I0828 04:41:53.812002 8 log.go:172] (0x4002cae4d0) Reply frame received for 3 I0828 04:41:53.812162 8 log.go:172] (0x4002cae4d0) (0x400207c780) Create stream I0828 04:41:53.812242 8 log.go:172] (0x4002cae4d0) (0x400207c780) Stream added, broadcasting: 5 I0828 04:41:53.813935 8 log.go:172] (0x4002cae4d0) Reply frame received for 5 I0828 04:41:53.886056 8 log.go:172] (0x4002cae4d0) Data frame received for 3 I0828 04:41:53.886233 8 log.go:172] (0x4001a1c320) (3) Data frame handling I0828 04:41:53.886352 8 log.go:172] (0x4001a1c320) (3) Data frame sent I0828 04:41:53.886437 8 log.go:172] (0x4002cae4d0) Data frame received for 3 I0828 04:41:53.886533 8 log.go:172] (0x4001a1c320) (3) Data frame handling I0828 04:41:53.886612 8 log.go:172] (0x4002cae4d0) Data frame received for 5 I0828 04:41:53.886694 8 log.go:172] (0x400207c780) (5) Data frame handling I0828 04:41:53.888068 8 log.go:172] (0x4002cae4d0) Data frame received for 1 I0828 04:41:53.888146 8 log.go:172] (0x4000f270e0) (1) Data frame handling I0828 04:41:53.888229 8 log.go:172] (0x4000f270e0) (1) Data frame sent I0828 04:41:53.888324 8 log.go:172] (0x4002cae4d0) (0x4000f270e0) Stream removed, broadcasting: 1 I0828 04:41:53.888444 8 log.go:172] (0x4002cae4d0) Go away received I0828 04:41:53.888710 8 log.go:172] (0x4002cae4d0) (0x4000f270e0) Stream removed, broadcasting: 1 I0828 04:41:53.888954 8 log.go:172] (0x4002cae4d0) (0x4001a1c320) Stream removed, broadcasting: 3 I0828 04:41:53.889061 8 log.go:172] (0x4002cae4d0) (0x400207c780) Stream removed, broadcasting: 5 Aug 28 04:41:53.889: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:41:53.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2983" for this suite. 
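The two ExecWithOptions blocks above are the test curl-ing an agnhost "dial" endpoint from a host-network pod: the endpoint asks the target pod for its hostname and echoes the answers back as JSON. As a rough stdlib-only sketch of the same probe (the addresses are placeholders; in this run they were 10.244.1.12, 10.244.2.124, and 10.244.1.11):

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// probe asks the agnhost test container at hostAddr to dial targetIP over
// HTTP and report the hostname that answered, mirroring the /dial requests
// visible in the log above.
func probe(hostAddr, targetIP string, port int) (string, error) {
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "http")
	q.Set("host", targetIP)
	q.Set("port", fmt.Sprint(port))
	q.Set("tries", "1")
	resp, err := http.Get(fmt.Sprintf("http://%s:8080/dial?%s", hostAddr, q.Encode()))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// Placeholder pod IPs; only meaningful from inside the cluster network.
	if out, err := probe("10.244.1.12", "10.244.2.124", 8080); err == nil {
		fmt.Println(out) // a JSON document listing the responses received
	} else {
		fmt.Println("dial failed:", err)
	}
}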
• [SLOW TEST:26.921 seconds] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:41:53.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 28 04:41:59.693: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:42:00.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6518" for this suite. 
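The Container Runtime case above checks the termination-message plumbing: the kubelet reads the message from the container's terminationMessagePath, and FallbackToLogsOnError only substitutes the log tail when that file is empty AND the container failed. Since this pod succeeds and writes "OK" to the file, the file wins. A hedged sketch of such a pod (names and image are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox",
				// Write the message to the (default) termination-log file and exit 0.
				Command:                []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
				TerminationMessagePath: "/dev/termination-log",
				// Fall back to the container log only on failure with an empty file.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}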
• [SLOW TEST:6.528 seconds] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2446,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:42:00.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Aug 28 04:42:01.103: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
Aug 28 04:42:04.434: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Aug 28 04:42:06.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186524, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186524, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186524, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186524, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 28 04:42:09.572: INFO: Waited 631.918224ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:42:10.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6113" for this suite. • [SLOW TEST:9.677 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":144,"skipped":2470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:42:10.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name 
projected-configmap-test-volume-2c47e4e0-9109-4c20-b8b9-9cff578367c7 STEP: Creating a pod to test consume configMaps Aug 28 04:42:10.640: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5e9573dd-943b-4422-9fad-eefb98972e63" in namespace "projected-5318" to be "success or failure" Aug 28 04:42:10.858: INFO: Pod "pod-projected-configmaps-5e9573dd-943b-4422-9fad-eefb98972e63": Phase="Pending", Reason="", readiness=false. Elapsed: 217.932645ms Aug 28 04:42:12.865: INFO: Pod "pod-projected-configmaps-5e9573dd-943b-4422-9fad-eefb98972e63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224826169s Aug 28 04:42:14.872: INFO: Pod "pod-projected-configmaps-5e9573dd-943b-4422-9fad-eefb98972e63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.231587807s STEP: Saw pod success Aug 28 04:42:14.872: INFO: Pod "pod-projected-configmaps-5e9573dd-943b-4422-9fad-eefb98972e63" satisfied condition "success or failure" Aug 28 04:42:14.877: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-5e9573dd-943b-4422-9fad-eefb98972e63 container projected-configmap-volume-test: STEP: delete the pod Aug 28 04:42:14.961: INFO: Waiting for pod pod-projected-configmaps-5e9573dd-943b-4422-9fad-eefb98972e63 to disappear Aug 28 04:42:15.049: INFO: Pod pod-projected-configmaps-5e9573dd-943b-4422-9fad-eefb98972e63 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:42:15.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5318" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2510,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:42:15.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-c5ed12de-09bc-457f-8ad8-af4a3b4d8356 STEP: Creating a pod to test consume secrets Aug 28 04:42:15.421: INFO: Waiting up to 5m0s for pod "pod-secrets-725bf0bd-244f-410a-9ff0-792e1a766f3e" in namespace "secrets-8759" to be "success or failure" Aug 28 04:42:15.514: INFO: Pod "pod-secrets-725bf0bd-244f-410a-9ff0-792e1a766f3e": Phase="Pending", Reason="", readiness=false. Elapsed: 92.923739ms Aug 28 04:42:17.534: INFO: Pod "pod-secrets-725bf0bd-244f-410a-9ff0-792e1a766f3e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.112713334s Aug 28 04:42:19.570: INFO: Pod "pod-secrets-725bf0bd-244f-410a-9ff0-792e1a766f3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.149000181s STEP: Saw pod success Aug 28 04:42:19.571: INFO: Pod "pod-secrets-725bf0bd-244f-410a-9ff0-792e1a766f3e" satisfied condition "success or failure" Aug 28 04:42:19.578: INFO: Trying to get logs from node jerma-worker pod pod-secrets-725bf0bd-244f-410a-9ff0-792e1a766f3e container secret-volume-test: STEP: delete the pod Aug 28 04:42:19.600: INFO: Waiting for pod pod-secrets-725bf0bd-244f-410a-9ff0-792e1a766f3e to disappear Aug 28 04:42:19.666: INFO: Pod pod-secrets-725bf0bd-244f-410a-9ff0-792e1a766f3e no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:42:19.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8759" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2528,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:42:19.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 28 04:42:22.354: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 28 04:42:24.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186542, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186542, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186542, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186542, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the 
endpoint Aug 28 04:42:27.423: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:42:27.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3527" for this suite. STEP: Destroying namespace "webhook-3527-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.363 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":147,"skipped":2557,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:42:28.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 28 04:42:28.177: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36947df3-601c-4bfd-82cd-16e68c2e3519" in namespace "projected-3802" to be "success or failure" Aug 28 04:42:28.186: INFO: Pod "downwardapi-volume-36947df3-601c-4bfd-82cd-16e68c2e3519": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.576704ms Aug 28 04:42:30.194: INFO: Pod "downwardapi-volume-36947df3-601c-4bfd-82cd-16e68c2e3519": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016359757s Aug 28 04:42:32.201: INFO: Pod "downwardapi-volume-36947df3-601c-4bfd-82cd-16e68c2e3519": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023282027s STEP: Saw pod success Aug 28 04:42:32.201: INFO: Pod "downwardapi-volume-36947df3-601c-4bfd-82cd-16e68c2e3519" satisfied condition "success or failure" Aug 28 04:42:32.206: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-36947df3-601c-4bfd-82cd-16e68c2e3519 container client-container: STEP: delete the pod Aug 28 04:42:32.410: INFO: Waiting for pod downwardapi-volume-36947df3-601c-4bfd-82cd-16e68c2e3519 to disappear Aug 28 04:42:32.443: INFO: Pod downwardapi-volume-36947df3-601c-4bfd-82cd-16e68c2e3519 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:42:32.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3802" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2566,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:42:32.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 28 04:42:36.408: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 28 04:42:38.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186556, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186556, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186556, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186556, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 28 04:42:40.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186556, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186556, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186556, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186556, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 28 04:42:43.653: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:42:43.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7592-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:42:44.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4420" for this suite. STEP: Destroying namespace "webhook-4420-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.576 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":149,"skipped":2584,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:42:45.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run rc /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526 [It] should create an rc from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 28 04:42:45.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5252' Aug 28 04:42:49.701: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 28 04:42:49.701: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Aug 28 04:42:49.731: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-wdd77] Aug 28 04:42:49.731: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-wdd77" in namespace "kubectl-5252" to be "running and ready" Aug 28 04:42:49.735: INFO: Pod "e2e-test-httpd-rc-wdd77": Phase="Pending", Reason="", readiness=false. Elapsed: 3.783301ms Aug 28 04:42:51.767: INFO: Pod "e2e-test-httpd-rc-wdd77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036239707s Aug 28 04:42:53.773: INFO: Pod "e2e-test-httpd-rc-wdd77": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.042328074s Aug 28 04:42:53.774: INFO: Pod "e2e-test-httpd-rc-wdd77" satisfied condition "running and ready" Aug 28 04:42:53.774: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-wdd77] Aug 28 04:42:53.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-5252' Aug 28 04:42:55.125: INFO: stderr: "" Aug 28 04:42:55.125: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.131. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.131. Set the 'ServerName' directive globally to suppress this message\n[Fri Aug 28 04:42:52.362259 2020] [mpm_event:notice] [pid 1:tid 140369551846248] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Aug 28 04:42:52.362308 2020] [core:notice] [pid 1:tid 140369551846248] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531 Aug 28 04:42:55.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5252' Aug 28 04:42:56.404: INFO: stderr: "" Aug 28 04:42:56.405: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:42:56.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5252" for this suite. 
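The stderr captured above is kubectl itself warning that `--generator=run/v1` was already deprecated in this release; per the stdout, the object it generates is simply a ReplicationController wrapping the given image. A hedged sketch of the equivalent object built directly with the core/v1 types (the label key/value follows kubectl's convention but is an assumption here):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-httpd-rc"}
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-rc"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels, // RC selectors are plain map equality, not set-based
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-httpd-rc",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}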
• [SLOW TEST:11.376 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 should create an rc from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":150,"skipped":2625,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:42:56.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0828 04:43:06.587850 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 28 04:43:06.588: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:43:06.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9440" for this suite. 
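The Garbage collector case above ("delete pods created by rc when not orphaning") is driven by the delete propagation policy: Background returns immediately and lets the GC remove dependents via their ownerReferences, Foreground blocks until dependents are gone, and Orphan leaves them behind. A hedged client-go sketch (namespace and RC name are placeholders, and it assumes a client-go version whose Delete takes a context):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig, as the suite does.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Background propagation: the RC disappears at once and the garbage
	// collector then deletes its pods, which is what the test waits for.
	policy := metav1.DeletePropagationBackground
	err = client.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "my-rc", metav1.DeleteOptions{PropagationPolicy: &policy})
	fmt.Println("delete returned:", err)
}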
• [SLOW TEST:10.185 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":151,"skipped":2633,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:43:06.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Aug 28 04:43:09.226: INFO: created pod pod-service-account-defaultsa Aug 28 04:43:09.227: INFO: pod pod-service-account-defaultsa service account token volume mount: true Aug 28 04:43:09.308: INFO: created pod pod-service-account-mountsa Aug 28 04:43:09.308: INFO: pod pod-service-account-mountsa service account token volume mount: true Aug 28 04:43:09.463: INFO: created pod pod-service-account-nomountsa Aug 28 04:43:09.463: INFO: pod pod-service-account-nomountsa service account token volume mount: false Aug 28 04:43:09.511: INFO: created pod pod-service-account-defaultsa-mountspec Aug 28 04:43:09.511: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Aug 28 04:43:09.961: INFO: created pod pod-service-account-mountsa-mountspec Aug 28 04:43:09.961: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Aug 28 04:43:09.973: INFO: created pod pod-service-account-nomountsa-mountspec Aug 28 04:43:09.973: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Aug 28 04:43:10.040: INFO: created pod pod-service-account-defaultsa-nomountspec Aug 28 04:43:10.040: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Aug 28 04:43:10.227: INFO: created pod pod-service-account-mountsa-nomountspec Aug 28 04:43:10.227: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Aug 28 04:43:10.276: INFO: created pod pod-service-account-nomountsa-nomountspec Aug 28 04:43:10.276: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:43:10.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:43:06.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Aug 28 04:43:09.226: INFO: created pod pod-service-account-defaultsa
Aug 28 04:43:09.227: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 28 04:43:09.308: INFO: created pod pod-service-account-mountsa
Aug 28 04:43:09.308: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 28 04:43:09.463: INFO: created pod pod-service-account-nomountsa
Aug 28 04:43:09.463: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 28 04:43:09.511: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 28 04:43:09.511: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 28 04:43:09.961: INFO: created pod pod-service-account-mountsa-mountspec
Aug 28 04:43:09.961: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 28 04:43:09.973: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 28 04:43:09.973: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 28 04:43:10.040: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 28 04:43:10.040: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 28 04:43:10.227: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 28 04:43:10.227: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 28 04:43:10.276: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 28 04:43:10.276: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:43:10.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3838" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":152,"skipped":2670,"failed":0}
SSS
------------------------------
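The nine pods above cover the combinations of opting out at the service-account level and at the pod-spec level; on the pod side the switch is the AutomountServiceAccountToken field, which overrides the service account's setting. A sketch of the pod-spec side (the image and container name here are illustrative, not from this run):

  // imports: v1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  f := false
  pod := &v1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountsa"},
      Spec: v1.PodSpec{
          ServiceAccountName:           "default",
          AutomountServiceAccountToken: &f, // pod-level opt-out wins over the service account
          Containers: []v1.Container{{Name: "token-test", Image: "k8s.gcr.io/pause:3.1"}},
      },
  }
  // _, err := client.CoreV1().Pods(ns).Create(pod)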
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":153,"skipped":2673,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:43:15.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:43:17.040: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7c32ace6-0e84-4893-87a8-d46b5cef6a97", Controller:(*bool)(0x4003af6d7a), BlockOwnerDeletion:(*bool)(0x4003af6d7b)}} Aug 28 04:43:17.453: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"981f8cb0-7a6a-4680-b645-890d66757abf", Controller:(*bool)(0x4002d510f2), BlockOwnerDeletion:(*bool)(0x4002d510f3)}} Aug 28 04:43:17.497: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ed93dd58-9fa5-4336-94c8-57dde3ef118c", Controller:(*bool)(0x4002d512ca), BlockOwnerDeletion:(*bool)(0x4002d512cb)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:43:28.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1562" for this suite. 
• [SLOW TEST:13.041 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":154,"skipped":2688,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:43:28.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-5dc2b1d0-f2a0-4b4c-be18-bafcc55532b3 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-5dc2b1d0-f2a0-4b4c-be18-bafcc55532b3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:43:36.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-239" for this suite. 
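The pod1→pod3, pod2→pod1, pod3→pod2 references logged above are ordinary metav1.OwnerReference entries, and the point of the test is that the garbage collector still makes progress when they form a cycle. Roughly how one link is wired up (the pod objects are assumed to have been fetched already; error handling elided):

  // imports: metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  t := true
  pod1.OwnerReferences = []metav1.OwnerReference{{
      APIVersion:         "v1",
      Kind:               "Pod",
      Name:               pod3.Name,
      UID:                pod3.UID,
      Controller:         &t,
      BlockOwnerDeletion: &t,
  }}
  // pod2 points at pod1 and pod3 at pod2 the same way; updating all three
  // closes the circle that the collector must not deadlock on.
  // _, err := client.CoreV1().Pods(ns).Update(pod1)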
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:43:28.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-5dc2b1d0-f2a0-4b4c-be18-bafcc55532b3
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-5dc2b1d0-f2a0-4b4c-be18-bafcc55532b3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:43:36.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-239" for this suite.

• [SLOW TEST:8.430 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2696,"failed":0}
SSSS
------------------------------
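The update step above is nothing more than a mutation of the ConfigMap's Data map; the kubelet re-syncs projected configMap volumes periodically, which is why the test then polls the mounted file rather than expecting an instant change. A sketch of the update side (the key and values are illustrative):

  // imports: metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  cm, err := client.CoreV1().ConfigMaps(ns).Get(cmName, metav1.GetOptions{})
  if err != nil {
      return err
  }
  cm.Data["data-1"] = "value-2" // illustrative key/value
  _, err = client.CoreV1().ConfigMaps(ns).Update(cm)
  // the file mounted from this key eventually shows "value-2" without a pod restart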
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":156,"skipped":2700,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:43:36.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8661 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-8661 I0828 04:43:37.035225 8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8661, replica count: 2 I0828 04:43:40.086633 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0828 04:43:43.087427 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 28 04:43:43.087: INFO: Creating new exec pod Aug 28 04:43:50.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8661 execpodq4qrt -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 28 04:43:52.026: INFO: stderr: "I0828 04:43:51.905575 3372 log.go:172] (0x40002280b0) (0x400080dcc0) Create stream\nI0828 04:43:51.909705 3372 log.go:172] (0x40002280b0) (0x400080dcc0) Stream added, broadcasting: 1\nI0828 04:43:51.922301 3372 log.go:172] (0x40002280b0) Reply frame received for 1\nI0828 04:43:51.922847 3372 log.go:172] (0x40002280b0) (0x400080dd60) Create stream\nI0828 04:43:51.922904 3372 log.go:172] (0x40002280b0) (0x400080dd60) Stream added, broadcasting: 3\nI0828 04:43:51.924711 3372 log.go:172] (0x40002280b0) Reply frame received for 3\nI0828 04:43:51.925393 3372 log.go:172] (0x40002280b0) (0x4000776000) Create stream\nI0828 04:43:51.925505 3372 log.go:172] (0x40002280b0) (0x4000776000) Stream added, broadcasting: 5\nI0828 04:43:51.927458 3372 log.go:172] (0x40002280b0) Reply frame received for 5\nI0828 04:43:52.006155 3372 log.go:172] (0x40002280b0) Data frame received for 5\nI0828 04:43:52.006333 3372 log.go:172] (0x40002280b0) Data frame received for 3\nI0828 04:43:52.006563 3372 log.go:172] (0x400080dd60) (3) Data frame handling\nI0828 04:43:52.006814 3372 log.go:172] (0x4000776000) (5) Data frame handling\nI0828 04:43:52.008278 3372 log.go:172] 
(0x40002280b0) Data frame received for 1\nI0828 04:43:52.008377 3372 log.go:172] (0x400080dcc0) (1) Data frame handling\nI0828 04:43:52.009716 3372 log.go:172] (0x400080dcc0) (1) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0828 04:43:52.010838 3372 log.go:172] (0x4000776000) (5) Data frame sent\nI0828 04:43:52.010929 3372 log.go:172] (0x40002280b0) Data frame received for 5\nI0828 04:43:52.011972 3372 log.go:172] (0x40002280b0) (0x400080dcc0) Stream removed, broadcasting: 1\nI0828 04:43:52.012914 3372 log.go:172] (0x4000776000) (5) Data frame handling\nI0828 04:43:52.013071 3372 log.go:172] (0x4000776000) (5) Data frame sent\nI0828 04:43:52.013183 3372 log.go:172] (0x40002280b0) Data frame received for 5\nI0828 04:43:52.013255 3372 log.go:172] (0x4000776000) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0828 04:43:52.013661 3372 log.go:172] (0x40002280b0) Go away received\nI0828 04:43:52.016092 3372 log.go:172] (0x40002280b0) (0x400080dcc0) Stream removed, broadcasting: 1\nI0828 04:43:52.016682 3372 log.go:172] (0x40002280b0) (0x400080dd60) Stream removed, broadcasting: 3\nI0828 04:43:52.017014 3372 log.go:172] (0x40002280b0) (0x4000776000) Stream removed, broadcasting: 5\n" Aug 28 04:43:52.027: INFO: stdout: "" Aug 28 04:43:52.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8661 execpodq4qrt -- /bin/sh -x -c nc -zv -t -w 2 10.101.219.85 80' Aug 28 04:43:53.561: INFO: stderr: "I0828 04:43:53.448911 3395 log.go:172] (0x400001a000) (0x4000984000) Create stream\nI0828 04:43:53.452508 3395 log.go:172] (0x400001a000) (0x4000984000) Stream added, broadcasting: 1\nI0828 04:43:53.466298 3395 log.go:172] (0x400001a000) Reply frame received for 1\nI0828 04:43:53.467015 3395 log.go:172] (0x400001a000) (0x400050f5e0) Create stream\nI0828 04:43:53.467086 3395 log.go:172] (0x400001a000) (0x400050f5e0) Stream added, broadcasting: 3\nI0828 04:43:53.469102 3395 log.go:172] (0x400001a000) Reply frame received for 3\nI0828 04:43:53.469635 3395 log.go:172] (0x400001a000) (0x4000a12000) Create stream\nI0828 04:43:53.469754 3395 log.go:172] (0x400001a000) (0x4000a12000) Stream added, broadcasting: 5\nI0828 04:43:53.471481 3395 log.go:172] (0x400001a000) Reply frame received for 5\nI0828 04:43:53.537973 3395 log.go:172] (0x400001a000) Data frame received for 3\nI0828 04:43:53.538351 3395 log.go:172] (0x400001a000) Data frame received for 5\nI0828 04:43:53.538560 3395 log.go:172] (0x4000a12000) (5) Data frame handling\nI0828 04:43:53.538692 3395 log.go:172] (0x400050f5e0) (3) Data frame handling\nI0828 04:43:53.539000 3395 log.go:172] (0x400001a000) Data frame received for 1\nI0828 04:43:53.539090 3395 log.go:172] (0x4000984000) (1) Data frame handling\nI0828 04:43:53.539863 3395 log.go:172] (0x4000a12000) (5) Data frame sent\nI0828 04:43:53.540323 3395 log.go:172] (0x4000984000) (1) Data frame sent\nI0828 04:43:53.540568 3395 log.go:172] (0x400001a000) Data frame received for 5\nI0828 04:43:53.540668 3395 log.go:172] (0x4000a12000) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.219.85 80\nConnection to 10.101.219.85 80 port [tcp/http] succeeded!\nI0828 04:43:53.542671 3395 log.go:172] (0x400001a000) (0x4000984000) Stream removed, broadcasting: 1\nI0828 04:43:53.544941 3395 log.go:172] (0x400001a000) Go away received\nI0828 04:43:53.547774 3395 log.go:172] (0x400001a000) (0x4000984000) Stream removed, broadcasting: 1\nI0828 04:43:53.548032 3395 log.go:172] (0x400001a000) (0x400050f5e0) Stream 
removed, broadcasting: 3\nI0828 04:43:53.548223 3395 log.go:172] (0x400001a000) (0x4000a12000) Stream removed, broadcasting: 5\n" Aug 28 04:43:53.561: INFO: stdout: "" Aug 28 04:43:53.562: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:43:53.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8661" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:16.999 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":157,"skipped":2714,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:43:53.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-f55ea51b-74e3-45cc-903f-6f8cb2f2c628 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:44:04.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6135" for this suite. 
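The type change exercised above is an ordinary update of the Service spec: the externalName field is cleared when leaving type=ExternalName, and the ClusterIP form carries a port list (port 80, as probed with nc in this run). A hedged sketch (error handling abbreviated):

  // imports: v1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  svc, err := client.CoreV1().Services(ns).Get("externalname-service", metav1.GetOptions{})
  if err != nil {
      return err
  }
  svc.Spec.Type = v1.ServiceTypeClusterIP
  svc.Spec.ExternalName = "" // cleared when moving away from type=ExternalName
  svc.Spec.Ports = []v1.ServicePort{{Port: 80}}
  _, err = client.CoreV1().Services(ns).Update(svc)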
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:43:53.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-f55ea51b-74e3-45cc-903f-6f8cb2f2c628
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:44:04.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6135" for this suite.

• [SLOW TEST:10.499 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2732,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:44:04.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6931
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 28 04:44:04.373: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 28 04:44:32.611: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.143:8080/dial?request=hostname&protocol=udp&host=10.244.2.142&port=8081&tries=1'] Namespace:pod-network-test-6931 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 04:44:32.611: INFO: >>> kubeConfig: /root/.kube/config
I0828 04:44:32.675031       8 log.go:172] (0x40028b04d0) (0x4000d63220) Create stream
I0828 04:44:32.675237       8 log.go:172] (0x40028b04d0) (0x4000d63220) Stream added, broadcasting: 1
I0828 04:44:32.678846       8 log.go:172] (0x40028b04d0) Reply frame received for 1
I0828 04:44:32.679093       8 log.go:172] (0x40028b04d0) (0x4000d632c0) Create stream
I0828 04:44:32.679215       8 log.go:172] (0x40028b04d0) (0x4000d632c0) Stream added, broadcasting: 3
I0828 04:44:32.681115       8 log.go:172] (0x40028b04d0) Reply frame received for 3
I0828 04:44:32.681271       8 log.go:172] (0x40028b04d0) (0x4000957a40) Create stream
I0828 04:44:32.681361       8 log.go:172] (0x40028b04d0) (0x4000957a40) Stream added, broadcasting: 5
I0828 04:44:32.683041       8 log.go:172] (0x40028b04d0) Reply frame received for 5
I0828 04:44:32.762421       8 log.go:172] (0x40028b04d0) Data frame received for 3
I0828 04:44:32.762673       8 log.go:172] (0x40028b04d0) Data frame received for 5
I0828 04:44:32.762814       8 log.go:172] (0x4000957a40) (5) Data frame handling
I0828 04:44:32.763005       8 log.go:172] (0x4000d632c0) (3) Data frame handling
I0828 04:44:32.763110       8 log.go:172] (0x4000d632c0) (3) Data frame sent
I0828 04:44:32.763201       8 log.go:172] (0x40028b04d0) Data frame received for 3
I0828 04:44:32.763281       8 log.go:172] (0x4000d632c0) (3) Data frame handling
I0828 04:44:32.765011       8 log.go:172] (0x40028b04d0) Data frame received for 1
I0828 04:44:32.765078       8 log.go:172] (0x4000d63220) (1) Data frame handling
I0828 04:44:32.765162       8 log.go:172] (0x4000d63220) (1) Data frame sent
I0828 04:44:32.765273       8 log.go:172] (0x40028b04d0) (0x4000d63220) Stream removed, broadcasting: 1
I0828 04:44:32.765375       8 log.go:172] (0x40028b04d0) Go away received
I0828 04:44:32.765800       8 log.go:172] (0x40028b04d0) (0x4000d63220) Stream removed, broadcasting: 1
I0828 04:44:32.765948       8 log.go:172] (0x40028b04d0) (0x4000d632c0) Stream removed, broadcasting: 3
I0828 04:44:32.766064       8 log.go:172] (0x40028b04d0) (0x4000957a40) Stream removed, broadcasting: 5
Aug 28 04:44:32.766: INFO: Waiting for responses: map[]
Aug 28 04:44:32.771: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.143:8080/dial?request=hostname&protocol=udp&host=10.244.1.24&port=8081&tries=1'] Namespace:pod-network-test-6931 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 04:44:32.771: INFO: >>> kubeConfig: /root/.kube/config
I0828 04:44:32.830671       8 log.go:172] (0x40031c22c0) (0x4001fd83c0) Create stream
I0828 04:44:32.830809       8 log.go:172] (0x40031c22c0) (0x4001fd83c0) Stream added, broadcasting: 1
I0828 04:44:32.833829       8 log.go:172] (0x40031c22c0) Reply frame received for 1
I0828 04:44:32.833965       8 log.go:172] (0x40031c22c0) (0x4001fd8500) Create stream
I0828 04:44:32.834035       8 log.go:172] (0x40031c22c0) (0x4001fd8500) Stream added, broadcasting: 3
I0828 04:44:32.835415       8 log.go:172] (0x40031c22c0) Reply frame received for 3
I0828 04:44:32.835541       8 log.go:172] (0x40031c22c0) (0x4001acc1e0) Create stream
I0828 04:44:32.835619       8 log.go:172] (0x40031c22c0) (0x4001acc1e0) Stream added, broadcasting: 5
I0828 04:44:32.836847       8 log.go:172] (0x40031c22c0) Reply frame received for 5
I0828 04:44:32.908519       8 log.go:172] (0x40031c22c0) Data frame received for 3
I0828 04:44:32.908860       8 log.go:172] (0x4001fd8500) (3) Data frame handling
I0828 04:44:32.908989       8 log.go:172] (0x4001fd8500) (3) Data frame sent
I0828 04:44:32.909120       8 log.go:172] (0x40031c22c0) Data frame received for 5
I0828 04:44:32.909279       8 log.go:172] (0x4001acc1e0) (5) Data frame handling
I0828 04:44:32.909374       8 log.go:172] (0x40031c22c0) Data frame received for 3
I0828 04:44:32.909513       8 log.go:172] (0x4001fd8500) (3) Data frame handling
I0828 04:44:32.910202       8 log.go:172] (0x40031c22c0) Data frame received for 1
I0828 04:44:32.910272       8 log.go:172] (0x4001fd83c0) (1) Data frame handling
I0828 04:44:32.910335       8 log.go:172] (0x4001fd83c0) (1) Data frame sent
I0828 04:44:32.910396       8 log.go:172] (0x40031c22c0) (0x4001fd83c0) Stream removed, broadcasting: 1
I0828 04:44:32.910464       8 log.go:172] (0x40031c22c0) Go away received
I0828 04:44:32.910822       8 log.go:172] (0x40031c22c0) (0x4001fd83c0) Stream removed, broadcasting: 1
I0828 04:44:32.910916       8 log.go:172] (0x40031c22c0) (0x4001fd8500) Stream removed, broadcasting: 3
I0828 04:44:32.911025       8 log.go:172] (0x40031c22c0) (0x4001acc1e0) Stream removed, broadcasting: 5
Aug 28 04:44:32.911: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:44:32.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6931" for this suite.

• [SLOW TEST:28.660 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2780,"failed":0}
SSSSSS
------------------------------
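The connectivity probe above works by asking the agnhost webserver in the host-network test pod to dial the target pod over UDP and report what came back; the same request the framework issues with curl can be made with plain net/http (the pod IPs are the ones from this run):

  // imports: "fmt", "io/ioutil", "net/http"
  // the "dial" endpoint asks 10.244.2.143 to send a "hostname" probe to
  // 10.244.2.142:8081 over UDP and return the replies it collected
  resp, err := http.Get("http://10.244.2.143:8080/dial?request=hostname&protocol=udp&host=10.244.2.142&port=8081&tries=1")
  if err != nil {
      return err
  }
  defer resp.Body.Close()
  body, _ := ioutil.ReadAll(resp.Body)
  fmt.Println(string(body)) // a small JSON document listing the responses received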
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:44:32.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 04:44:33.044: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:44:40.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4178" for this suite.

• [SLOW TEST:7.584 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":160,"skipped":2786,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:44:40.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-5bed6db2-8065-4213-b325-832dabbb46d6
STEP: Creating a pod to test consume configMaps
Aug 28 04:44:40.803: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-10ed9c6a-746a-4cef-a379-38746641d1ac" in namespace "projected-5472" to be "success or failure"
Aug 28 04:44:40.807: INFO: Pod "pod-projected-configmaps-10ed9c6a-746a-4cef-a379-38746641d1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274956ms
Aug 28 04:44:42.818: INFO: Pod "pod-projected-configmaps-10ed9c6a-746a-4cef-a379-38746641d1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014763348s
Aug 28 04:44:44.824: INFO: Pod "pod-projected-configmaps-10ed9c6a-746a-4cef-a379-38746641d1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020811611s
Aug 28 04:44:46.829: INFO: Pod "pod-projected-configmaps-10ed9c6a-746a-4cef-a379-38746641d1ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026325067s
STEP: Saw pod success
Aug 28 04:44:46.829: INFO: Pod "pod-projected-configmaps-10ed9c6a-746a-4cef-a379-38746641d1ac" satisfied condition "success or failure"
Aug 28 04:44:46.832: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-10ed9c6a-746a-4cef-a379-38746641d1ac container projected-configmap-volume-test: 
STEP: delete the pod
Aug 28 04:44:46.877: INFO: Waiting for pod pod-projected-configmaps-10ed9c6a-746a-4cef-a379-38746641d1ac to disappear
Aug 28 04:44:46.892: INFO: Pod pod-projected-configmaps-10ed9c6a-746a-4cef-a379-38746641d1ac no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:44:46.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5472" for this suite.

• [SLOW TEST:6.395 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2805,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:44:46.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-ac51f7fd-dc60-402a-8529-c06615a6c44b
STEP: Creating a pod to test consume configMaps
Aug 28 04:44:47.088: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a6b9957-5a24-4c5d-9325-aace8a848513" in namespace "configmap-4097" to be "success or failure"
Aug 28 04:44:47.096: INFO: Pod "pod-configmaps-1a6b9957-5a24-4c5d-9325-aace8a848513": Phase="Pending", Reason="", readiness=false. Elapsed: 7.498842ms
Aug 28 04:44:49.103: INFO: Pod "pod-configmaps-1a6b9957-5a24-4c5d-9325-aace8a848513": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015293091s
Aug 28 04:44:51.110: INFO: Pod "pod-configmaps-1a6b9957-5a24-4c5d-9325-aace8a848513": Phase="Running", Reason="", readiness=true. Elapsed: 4.021824403s
Aug 28 04:44:53.117: INFO: Pod "pod-configmaps-1a6b9957-5a24-4c5d-9325-aace8a848513": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0291009s
STEP: Saw pod success
Aug 28 04:44:53.118: INFO: Pod "pod-configmaps-1a6b9957-5a24-4c5d-9325-aace8a848513" satisfied condition "success or failure"
Aug 28 04:44:53.123: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-1a6b9957-5a24-4c5d-9325-aace8a848513 container configmap-volume-test: 
STEP: delete the pod
Aug 28 04:44:53.162: INFO: Waiting for pod pod-configmaps-1a6b9957-5a24-4c5d-9325-aace8a848513 to disappear
Aug 28 04:44:53.198: INFO: Pod pod-configmaps-1a6b9957-5a24-4c5d-9325-aace8a848513 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:44:53.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4097" for this suite.

• [SLOW TEST:6.304 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2815,"failed":0}
SSS
------------------------------
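The "success or failure" wait that recurs in these volume tests is a poll on the pod phase until it reaches a terminal state; the framework then asserts which terminal state it was. A sketch of the loop, with intervals chosen to match the 5m0s timeout logged above:

  // imports: "time", v1 "k8s.io/api/core/v1",
  //          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
  //          "k8s.io/apimachinery/pkg/util/wait"
  err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
      pod, err := client.CoreV1().Pods(ns).Get(podName, metav1.GetOptions{})
      if err != nil {
          return false, err
      }
      // both Succeeded and Failed are terminal: the "success or failure" condition
      return pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed, nil
  })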
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:44:53.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 04:44:53.501: INFO: Waiting up to 5m0s for pod "busybox-user-65534-f723d00e-c797-4edd-806e-bd869cb98662" in namespace "security-context-test-8183" to be "success or failure"
Aug 28 04:44:53.520: INFO: Pod "busybox-user-65534-f723d00e-c797-4edd-806e-bd869cb98662": Phase="Pending", Reason="", readiness=false. Elapsed: 18.336626ms
Aug 28 04:44:55.608: INFO: Pod "busybox-user-65534-f723d00e-c797-4edd-806e-bd869cb98662": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10600764s
Aug 28 04:44:57.614: INFO: Pod "busybox-user-65534-f723d00e-c797-4edd-806e-bd869cb98662": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112518488s
Aug 28 04:44:57.614: INFO: Pod "busybox-user-65534-f723d00e-c797-4edd-806e-bd869cb98662" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:44:57.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8183" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2818,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
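The uid assertion above comes from the container's security context: setting RunAsUser is all the test needs. A sketch of the pod it creates (image and command here are illustrative, not taken from this log):

  // imports: v1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  uid := int64(65534)
  pod := &v1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534"},
      Spec: v1.PodSpec{
          RestartPolicy: v1.RestartPolicyNever,
          Containers: []v1.Container{{
              Name:            "main",
              Image:           "busybox",
              Command:         []string{"sh", "-c", "id -u"}, // prints the effective uid
              SecurityContext: &v1.SecurityContext{RunAsUser: &uid},
          }},
      },
  }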
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":164,"skipped":2854,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:44:58.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-b0ef6310-d542-435f-b144-0a3dc8c808d4 [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:44:58.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7970" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":165,"skipped":2867,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:44:58.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 28 04:45:01.843: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 28 04:45:03.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186701, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186701, loc:(*time.Location)(0x726af60)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186701, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186701, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 28 04:45:05.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186701, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186701, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186701, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186701, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 28 04:45:09.045: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:45:09.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7027" for this suite. STEP: Destroying namespace "webhook-7027-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.478 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":166,"skipped":2874,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:45:09.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-72738d46-7dc3-428f-ab88-09ec4587e48c STEP: Creating a pod to test consume configMaps Aug 28 04:45:09.430: INFO: Waiting up to 5m0s for pod "pod-configmaps-26fc30ea-ca49-40e5-8923-73328bb25e31" in namespace "configmap-1663" to be "success or failure" Aug 28 04:45:09.461: INFO: Pod "pod-configmaps-26fc30ea-ca49-40e5-8923-73328bb25e31": Phase="Pending", Reason="", readiness=false. Elapsed: 31.016339ms Aug 28 04:45:11.467: INFO: Pod "pod-configmaps-26fc30ea-ca49-40e5-8923-73328bb25e31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037226735s Aug 28 04:45:13.474: INFO: Pod "pod-configmaps-26fc30ea-ca49-40e5-8923-73328bb25e31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043537407s STEP: Saw pod success Aug 28 04:45:13.474: INFO: Pod "pod-configmaps-26fc30ea-ca49-40e5-8923-73328bb25e31" satisfied condition "success or failure" Aug 28 04:45:13.478: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-26fc30ea-ca49-40e5-8923-73328bb25e31 container configmap-volume-test: STEP: delete the pod Aug 28 04:45:13.499: INFO: Waiting for pod pod-configmaps-26fc30ea-ca49-40e5-8923-73328bb25e31 to disappear Aug 28 04:45:13.630: INFO: Pod pod-configmaps-26fc30ea-ca49-40e5-8923-73328bb25e31 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:45:13.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1663" for this suite. 
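The patch step above edits the live ValidatingWebhookConfiguration in place. As a sketch, a JSON patch that swaps the registered operations could look like this (the configuration name and the rule index are assumptions, not taken from this log):

  // imports: "k8s.io/apimachinery/pkg/types"
  patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`)
  _, err := client.AdmissionregistrationV1().ValidatingWebhookConfigurations().
      Patch("e2e-test-webhook-config", types.JSONPatchType, patch)
  // with CREATE registered again, the non-compliant ConfigMap create is rejected once more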
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2884,"failed":0} SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:45:13.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 28 04:45:46.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4498" for this suite. STEP: Destroying namespace "nsdeletetest-8390" for this suite. Aug 28 04:45:46.325: INFO: Namespace nsdeletetest-8390 was already deleted STEP: Destroying namespace "nsdeletetest-165" for this suite. 
• [SLOW TEST:32.688 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":168,"skipped":2886,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 28 04:45:46.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 28 04:45:46.470: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
alternatives.log
containers/
(the same two-entry kubelet /logs/ directory listing was returned for each of the 20 proxy requests in this test; the identical repetitions are collapsed to this single copy)
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0828 04:46:28.682302       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 28 04:46:28.682: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:46:28.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2065" for this suite.

• [SLOW TEST:42.141 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":170,"skipped":2915,"failed":0}
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:46:28.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8587
STEP: Creating active service to test reachability when its FQDN is referred to as externalName for another service
STEP: creating service externalsvc in namespace services-8587
STEP: creating replication controller externalsvc in namespace services-8587
I0828 04:46:28.981238       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8587, replica count: 2
I0828 04:46:32.032494       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 04:46:35.033222       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Aug 28 04:46:35.544: INFO: Creating new exec pod
Aug 28 04:46:41.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8587 execpodtdx2p -- /bin/sh -x -c nslookup clusterip-service'
Aug 28 04:46:43.547: INFO: stderr: "I0828 04:46:43.395472    3419 log.go:172] (0x40006e80b0) (0x4000714140) Create stream\nI0828 04:46:43.401176    3419 log.go:172] (0x40006e80b0) (0x4000714140) Stream added, broadcasting: 1\nI0828 04:46:43.416471    3419 log.go:172] (0x40006e80b0) Reply frame received for 1\nI0828 04:46:43.417761    3419 log.go:172] (0x40006e80b0) (0x40007f3c20) Create stream\nI0828 04:46:43.417879    3419 log.go:172] (0x40006e80b0) (0x40007f3c20) Stream added, broadcasting: 3\nI0828 04:46:43.419397    3419 log.go:172] (0x40006e80b0) Reply frame received for 3\nI0828 04:46:43.419647    3419 log.go:172] (0x40006e80b0) (0x40007141e0) Create stream\nI0828 04:46:43.419729    3419 log.go:172] (0x40006e80b0) (0x40007141e0) Stream added, broadcasting: 5\nI0828 04:46:43.421007    3419 log.go:172] (0x40006e80b0) Reply frame received for 5\nI0828 04:46:43.513287    3419 log.go:172] (0x40006e80b0) Data frame received for 5\nI0828 04:46:43.513731    3419 log.go:172] (0x40007141e0) (5) Data frame handling\nI0828 04:46:43.514705    3419 log.go:172] (0x40007141e0) (5) Data frame sent\n+ nslookup clusterip-service\nI0828 04:46:43.523021    3419 log.go:172] (0x40006e80b0) Data frame received for 3\nI0828 04:46:43.523117    3419 log.go:172] (0x40007f3c20) (3) Data frame handling\nI0828 04:46:43.523192    3419 log.go:172] (0x40007f3c20) (3) Data frame sent\nI0828 04:46:43.523998    3419 log.go:172] (0x40006e80b0) Data frame received for 3\nI0828 04:46:43.524068    3419 log.go:172] (0x40007f3c20) (3) Data frame handling\nI0828 04:46:43.524138    3419 log.go:172] (0x40007f3c20) (3) Data frame sent\nI0828 04:46:43.524808    3419 log.go:172] (0x40006e80b0) Data frame received for 5\nI0828 04:46:43.524928    3419 log.go:172] (0x40006e80b0) Data frame received for 3\nI0828 04:46:43.525030    3419 log.go:172] (0x40007f3c20) (3) Data frame handling\nI0828 04:46:43.525103    3419 log.go:172] (0x40007141e0) (5) Data frame handling\nI0828 04:46:43.527490    3419 log.go:172] (0x40006e80b0) Data frame received for 1\nI0828 04:46:43.527572    3419 log.go:172] (0x4000714140) (1) Data frame handling\nI0828 04:46:43.527648    3419 log.go:172] (0x4000714140) (1) Data frame sent\nI0828 04:46:43.529282    3419 log.go:172] (0x40006e80b0) (0x4000714140) Stream removed, broadcasting: 1\nI0828 04:46:43.532142    3419 log.go:172] (0x40006e80b0) Go away received\nI0828 04:46:43.535332    3419 log.go:172] (0x40006e80b0) (0x4000714140) Stream removed, broadcasting: 1\nI0828 04:46:43.535612    3419 log.go:172] (0x40006e80b0) (0x40007f3c20) Stream removed, broadcasting: 3\nI0828 04:46:43.535790    3419 log.go:172] (0x40006e80b0) (0x40007141e0) Stream removed, broadcasting: 5\n"
Aug 28 04:46:43.547: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8587.svc.cluster.local\tcanonical name = externalsvc.services-8587.svc.cluster.local.\nName:\texternalsvc.services-8587.svc.cluster.local\nAddress: 10.102.84.169\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-8587, will wait for the garbage collector to delete the pods
Aug 28 04:46:43.613: INFO: Deleting ReplicationController externalsvc took: 10.365816ms
Aug 28 04:46:44.014: INFO: Terminating ReplicationController externalsvc pods took: 400.879425ms
Aug 28 04:46:52.550: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:46:52.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8587" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:24.165 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":171,"skipped":2915,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:46:52.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 28 04:46:52.951: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4760 /api/v1/namespaces/watch-4760/configmaps/e2e-watch-test-configmap-a 0bfd7bc1-0053-47cb-bf08-896b9ad07dbd 4492791 0 2020-08-28 04:46:52 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 28 04:46:52.952: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4760 /api/v1/namespaces/watch-4760/configmaps/e2e-watch-test-configmap-a 0bfd7bc1-0053-47cb-bf08-896b9ad07dbd 4492791 0 2020-08-28 04:46:52 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 28 04:47:02.966: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4760 /api/v1/namespaces/watch-4760/configmaps/e2e-watch-test-configmap-a 0bfd7bc1-0053-47cb-bf08-896b9ad07dbd 4492841 0 2020-08-28 04:46:52 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 28 04:47:02.967: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4760 /api/v1/namespaces/watch-4760/configmaps/e2e-watch-test-configmap-a 0bfd7bc1-0053-47cb-bf08-896b9ad07dbd 4492841 0 2020-08-28 04:46:52 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 28 04:47:12.982: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4760 /api/v1/namespaces/watch-4760/configmaps/e2e-watch-test-configmap-a 0bfd7bc1-0053-47cb-bf08-896b9ad07dbd 4492871 0 2020-08-28 04:46:52 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 28 04:47:12.983: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4760 /api/v1/namespaces/watch-4760/configmaps/e2e-watch-test-configmap-a 0bfd7bc1-0053-47cb-bf08-896b9ad07dbd 4492871 0 2020-08-28 04:46:52 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 28 04:47:22.993: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4760 /api/v1/namespaces/watch-4760/configmaps/e2e-watch-test-configmap-a 0bfd7bc1-0053-47cb-bf08-896b9ad07dbd 4492901 0 2020-08-28 04:46:52 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 28 04:47:22.994: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4760 /api/v1/namespaces/watch-4760/configmaps/e2e-watch-test-configmap-a 0bfd7bc1-0053-47cb-bf08-896b9ad07dbd 4492901 0 2020-08-28 04:46:52 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 28 04:47:33.005: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4760 /api/v1/namespaces/watch-4760/configmaps/e2e-watch-test-configmap-b 570cde35-4b6c-4586-9daa-79e245cd778f 4492931 0 2020-08-28 04:47:32 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 28 04:47:33.005: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4760 /api/v1/namespaces/watch-4760/configmaps/e2e-watch-test-configmap-b 570cde35-4b6c-4586-9daa-79e245cd778f 4492931 0 2020-08-28 04:47:32 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 28 04:47:43.016: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4760 /api/v1/namespaces/watch-4760/configmaps/e2e-watch-test-configmap-b 570cde35-4b6c-4586-9daa-79e245cd778f 4492961 0 2020-08-28 04:47:32 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 28 04:47:43.016: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4760 /api/v1/namespaces/watch-4760/configmaps/e2e-watch-test-configmap-b 570cde35-4b6c-4586-9daa-79e245cd778f 4492961 0 2020-08-28 04:47:32 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:47:53.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4760" for this suite.

• [SLOW TEST:60.176 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":172,"skipped":2941,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:47:53.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4055
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-4055
I0828 04:47:53.245313       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4055, replica count: 2
I0828 04:47:56.296930       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 04:47:59.297832       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 28 04:47:59.298: INFO: Creating new exec pod
Aug 28 04:48:04.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4055 execpodv9wzv -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 28 04:48:05.824: INFO: stderr: "I0828 04:48:05.677444    3442 log.go:172] (0x4000a76000) (0x4000803ae0) Create stream\nI0828 04:48:05.682233    3442 log.go:172] (0x4000a76000) (0x4000803ae0) Stream added, broadcasting: 1\nI0828 04:48:05.697269    3442 log.go:172] (0x4000a76000) Reply frame received for 1\nI0828 04:48:05.697878    3442 log.go:172] (0x4000a76000) (0x4000766000) Create stream\nI0828 04:48:05.697946    3442 log.go:172] (0x4000a76000) (0x4000766000) Stream added, broadcasting: 3\nI0828 04:48:05.699616    3442 log.go:172] (0x4000a76000) Reply frame received for 3\nI0828 04:48:05.700010    3442 log.go:172] (0x4000a76000) (0x4000770000) Create stream\nI0828 04:48:05.700111    3442 log.go:172] (0x4000a76000) (0x4000770000) Stream added, broadcasting: 5\nI0828 04:48:05.701575    3442 log.go:172] (0x4000a76000) Reply frame received for 5\nI0828 04:48:05.802258    3442 log.go:172] (0x4000a76000) Data frame received for 5\nI0828 04:48:05.803159    3442 log.go:172] (0x4000a76000) Data frame received for 1\nI0828 04:48:05.803433    3442 log.go:172] (0x4000803ae0) (1) Data frame handling\nI0828 04:48:05.803910    3442 log.go:172] (0x4000a76000) Data frame received for 3\nI0828 04:48:05.804050    3442 log.go:172] (0x4000766000) (3) Data frame handling\nI0828 04:48:05.804272    3442 log.go:172] (0x4000770000) (5) Data frame handling\nI0828 04:48:05.807110    3442 log.go:172] (0x4000770000) (5) Data frame sent\nI0828 04:48:05.807202    3442 log.go:172] (0x4000803ae0) (1) Data frame sent\nI0828 04:48:05.807526    3442 log.go:172] (0x4000a76000) Data frame received for 5\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0828 04:48:05.807604    3442 log.go:172] (0x4000770000) (5) Data frame handling\nI0828 04:48:05.808654    3442 log.go:172] (0x4000a76000) (0x4000803ae0) Stream removed, broadcasting: 1\nI0828 04:48:05.810293    3442 log.go:172] (0x4000a76000) Go away received\nI0828 04:48:05.813516    3442 log.go:172] (0x4000a76000) (0x4000803ae0) Stream removed, broadcasting: 1\nI0828 04:48:05.813901    3442 log.go:172] (0x4000a76000) (0x4000766000) Stream removed, broadcasting: 3\nI0828 04:48:05.814160    3442 log.go:172] (0x4000a76000) (0x4000770000) Stream removed, broadcasting: 5\n"
Aug 28 04:48:05.825: INFO: stdout: ""
Aug 28 04:48:05.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4055 execpodv9wzv -- /bin/sh -x -c nc -zv -t -w 2 10.98.220.208 80'
Aug 28 04:48:07.328: INFO: stderr: "I0828 04:48:07.217242    3465 log.go:172] (0x4000a940b0) (0x40005695e0) Create stream\nI0828 04:48:07.221311    3465 log.go:172] (0x4000a940b0) (0x40005695e0) Stream added, broadcasting: 1\nI0828 04:48:07.232781    3465 log.go:172] (0x4000a940b0) Reply frame received for 1\nI0828 04:48:07.233426    3465 log.go:172] (0x4000a940b0) (0x4000a0c000) Create stream\nI0828 04:48:07.233516    3465 log.go:172] (0x4000a940b0) (0x4000a0c000) Stream added, broadcasting: 3\nI0828 04:48:07.235482    3465 log.go:172] (0x4000a940b0) Reply frame received for 3\nI0828 04:48:07.235914    3465 log.go:172] (0x4000a940b0) (0x4000bf40a0) Create stream\nI0828 04:48:07.236030    3465 log.go:172] (0x4000a940b0) (0x4000bf40a0) Stream added, broadcasting: 5\nI0828 04:48:07.237690    3465 log.go:172] (0x4000a940b0) Reply frame received for 5\nI0828 04:48:07.302679    3465 log.go:172] (0x4000a940b0) Data frame received for 5\nI0828 04:48:07.303613    3465 log.go:172] (0x4000a940b0) Data frame received for 3\nI0828 04:48:07.303771    3465 log.go:172] (0x4000a0c000) (3) Data frame handling\nI0828 04:48:07.304320    3465 log.go:172] (0x4000a940b0) Data frame received for 1\nI0828 04:48:07.304499    3465 log.go:172] (0x4000bf40a0) (5) Data frame handling\nI0828 04:48:07.304713    3465 log.go:172] (0x40005695e0) (1) Data frame handling\nI0828 04:48:07.307689    3465 log.go:172] (0x40005695e0) (1) Data frame sent\n+ nc -zv -t -w 2 10.98.220.208 80\nConnection to 10.98.220.208 80 port [tcp/http] succeeded!\nI0828 04:48:07.308006    3465 log.go:172] (0x4000bf40a0) (5) Data frame sent\nI0828 04:48:07.308237    3465 log.go:172] (0x4000a940b0) Data frame received for 5\nI0828 04:48:07.308323    3465 log.go:172] (0x4000bf40a0) (5) Data frame handling\nI0828 04:48:07.309361    3465 log.go:172] (0x4000a940b0) (0x40005695e0) Stream removed, broadcasting: 1\nI0828 04:48:07.309917    3465 log.go:172] (0x4000a940b0) Go away received\nI0828 04:48:07.313341    3465 log.go:172] (0x4000a940b0) (0x40005695e0) Stream removed, broadcasting: 1\nI0828 04:48:07.313689    3465 log.go:172] (0x4000a940b0) (0x4000a0c000) Stream removed, broadcasting: 3\nI0828 04:48:07.313964    3465 log.go:172] (0x4000a940b0) (0x4000bf40a0) Stream removed, broadcasting: 5\n"
Aug 28 04:48:07.328: INFO: stdout: ""
Aug 28 04:48:07.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4055 execpodv9wzv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 32238'
Aug 28 04:48:08.840: INFO: stderr: "I0828 04:48:08.733315    3489 log.go:172] (0x40003c0000) (0x4000a26000) Create stream\nI0828 04:48:08.737780    3489 log.go:172] (0x40003c0000) (0x4000a26000) Stream added, broadcasting: 1\nI0828 04:48:08.750709    3489 log.go:172] (0x40003c0000) Reply frame received for 1\nI0828 04:48:08.751368    3489 log.go:172] (0x40003c0000) (0x40008039a0) Create stream\nI0828 04:48:08.751431    3489 log.go:172] (0x40003c0000) (0x40008039a0) Stream added, broadcasting: 3\nI0828 04:48:08.753445    3489 log.go:172] (0x40003c0000) Reply frame received for 3\nI0828 04:48:08.754071    3489 log.go:172] (0x40003c0000) (0x4000a22000) Create stream\nI0828 04:48:08.754231    3489 log.go:172] (0x40003c0000) (0x4000a22000) Stream added, broadcasting: 5\nI0828 04:48:08.755992    3489 log.go:172] (0x40003c0000) Reply frame received for 5\nI0828 04:48:08.810836    3489 log.go:172] (0x40003c0000) Data frame received for 3\nI0828 04:48:08.811328    3489 log.go:172] (0x40003c0000) Data frame received for 5\nI0828 04:48:08.811528    3489 log.go:172] (0x4000a22000) (5) Data frame handling\nI0828 04:48:08.811734    3489 log.go:172] (0x40008039a0) (3) Data frame handling\nI0828 04:48:08.813478    3489 log.go:172] (0x40003c0000) Data frame received for 1\nI0828 04:48:08.813581    3489 log.go:172] (0x4000a26000) (1) Data frame handling\nI0828 04:48:08.813696    3489 log.go:172] (0x4000a22000) (5) Data frame sent\nI0828 04:48:08.813945    3489 log.go:172] (0x4000a26000) (1) Data frame sent\nI0828 04:48:08.814077    3489 log.go:172] (0x40003c0000) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.6 32238\nI0828 04:48:08.814173    3489 log.go:172] (0x4000a22000) (5) Data frame handling\nConnection to 172.18.0.6 32238 port [tcp/32238] succeeded!\nI0828 04:48:08.815422    3489 log.go:172] (0x4000a22000) (5) Data frame sent\nI0828 04:48:08.815629    3489 log.go:172] (0x40003c0000) Data frame received for 5\nI0828 04:48:08.815748    3489 log.go:172] (0x4000a22000) (5) Data frame handling\nI0828 04:48:08.817583    3489 log.go:172] (0x40003c0000) (0x4000a26000) Stream removed, broadcasting: 1\nI0828 04:48:08.819060    3489 log.go:172] (0x40003c0000) Go away received\nI0828 04:48:08.823778    3489 log.go:172] (0x40003c0000) (0x4000a26000) Stream removed, broadcasting: 1\nI0828 04:48:08.824173    3489 log.go:172] (0x40003c0000) (0x40008039a0) Stream removed, broadcasting: 3\nI0828 04:48:08.824409    3489 log.go:172] (0x40003c0000) (0x4000a22000) Stream removed, broadcasting: 5\n"
Aug 28 04:48:08.841: INFO: stdout: ""
Aug 28 04:48:08.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4055 execpodv9wzv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 32238'
Aug 28 04:48:10.310: INFO: stderr: "I0828 04:48:10.181867    3513 log.go:172] (0x4000a4a000) (0x400096a000) Create stream\nI0828 04:48:10.185965    3513 log.go:172] (0x4000a4a000) (0x400096a000) Stream added, broadcasting: 1\nI0828 04:48:10.197412    3513 log.go:172] (0x4000a4a000) Reply frame received for 1\nI0828 04:48:10.198093    3513 log.go:172] (0x4000a4a000) (0x400096a0a0) Create stream\nI0828 04:48:10.198158    3513 log.go:172] (0x4000a4a000) (0x400096a0a0) Stream added, broadcasting: 3\nI0828 04:48:10.199528    3513 log.go:172] (0x4000a4a000) Reply frame received for 3\nI0828 04:48:10.199783    3513 log.go:172] (0x4000a4a000) (0x400096a1e0) Create stream\nI0828 04:48:10.199837    3513 log.go:172] (0x4000a4a000) (0x400096a1e0) Stream added, broadcasting: 5\nI0828 04:48:10.200830    3513 log.go:172] (0x4000a4a000) Reply frame received for 5\nI0828 04:48:10.285402    3513 log.go:172] (0x4000a4a000) Data frame received for 3\nI0828 04:48:10.285714    3513 log.go:172] (0x400096a0a0) (3) Data frame handling\nI0828 04:48:10.285882    3513 log.go:172] (0x4000a4a000) Data frame received for 5\nI0828 04:48:10.285988    3513 log.go:172] (0x400096a1e0) (5) Data frame handling\nI0828 04:48:10.287323    3513 log.go:172] (0x4000a4a000) Data frame received for 1\nI0828 04:48:10.287503    3513 log.go:172] (0x400096a000) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.3 32238\nConnection to 172.18.0.3 32238 port [tcp/32238] succeeded!\nI0828 04:48:10.288493    3513 log.go:172] (0x400096a000) (1) Data frame sent\nI0828 04:48:10.288700    3513 log.go:172] (0x400096a1e0) (5) Data frame sent\nI0828 04:48:10.288907    3513 log.go:172] (0x4000a4a000) Data frame received for 5\nI0828 04:48:10.288990    3513 log.go:172] (0x400096a1e0) (5) Data frame handling\nI0828 04:48:10.290044    3513 log.go:172] (0x4000a4a000) (0x400096a000) Stream removed, broadcasting: 1\nI0828 04:48:10.294296    3513 log.go:172] (0x4000a4a000) Go away received\nI0828 04:48:10.296414    3513 log.go:172] (0x4000a4a000) (0x400096a000) Stream removed, broadcasting: 1\nI0828 04:48:10.296970    3513 log.go:172] (0x4000a4a000) (0x400096a0a0) Stream removed, broadcasting: 3\nI0828 04:48:10.297361    3513 log.go:172] (0x4000a4a000) (0x400096a1e0) Stream removed, broadcasting: 5\n"
Aug 28 04:48:10.311: INFO: stdout: ""
Aug 28 04:48:10.311: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:48:10.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4055" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:17.322 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":173,"skipped":2966,"failed":0}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:48:10.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 28 04:48:14.824: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:48:14.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-104" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2969,"failed":0}

------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:48:14.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 28 04:48:23.024: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 28 04:48:23.052: INFO: Pod pod-with-poststart-http-hook still exists
Aug 28 04:48:25.052: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 28 04:48:25.060: INFO: Pod pod-with-poststart-http-hook still exists
Aug 28 04:48:27.053: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 28 04:48:27.059: INFO: Pod pod-with-poststart-http-hook still exists
Aug 28 04:48:29.052: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 28 04:48:29.060: INFO: Pod pod-with-poststart-http-hook still exists
Aug 28 04:48:31.053: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 28 04:48:31.059: INFO: Pod pod-with-poststart-http-hook still exists
Aug 28 04:48:33.052: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 28 04:48:33.060: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:48:33.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1898" for this suite.

• [SLOW TEST:18.219 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2969,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:48:33.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Aug 28 04:48:33.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1821 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug 28 04:48:38.298: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0828 04:48:38.146258    3537 log.go:172] (0x4000ab00b0) (0x400080fb80) Create stream\nI0828 04:48:38.152091    3537 log.go:172] (0x4000ab00b0) (0x400080fb80) Stream added, broadcasting: 1\nI0828 04:48:38.164602    3537 log.go:172] (0x4000ab00b0) Reply frame received for 1\nI0828 04:48:38.165687    3537 log.go:172] (0x4000ab00b0) (0x40007aa0a0) Create stream\nI0828 04:48:38.165799    3537 log.go:172] (0x4000ab00b0) (0x40007aa0a0) Stream added, broadcasting: 3\nI0828 04:48:38.167706    3537 log.go:172] (0x4000ab00b0) Reply frame received for 3\nI0828 04:48:38.168183    3537 log.go:172] (0x4000ab00b0) (0x40009900a0) Create stream\nI0828 04:48:38.168324    3537 log.go:172] (0x4000ab00b0) (0x40009900a0) Stream added, broadcasting: 5\nI0828 04:48:38.169770    3537 log.go:172] (0x4000ab00b0) Reply frame received for 5\nI0828 04:48:38.170117    3537 log.go:172] (0x4000ab00b0) (0x400080fc20) Create stream\nI0828 04:48:38.170199    3537 log.go:172] (0x4000ab00b0) (0x400080fc20) Stream added, broadcasting: 7\nI0828 04:48:38.171582    3537 log.go:172] (0x4000ab00b0) Reply frame received for 7\nI0828 04:48:38.174953    3537 log.go:172] (0x40007aa0a0) (3) Writing data frame\nI0828 04:48:38.175935    3537 log.go:172] (0x40007aa0a0) (3) Writing data frame\nI0828 04:48:38.177114    3537 log.go:172] (0x4000ab00b0) Data frame received for 5\nI0828 04:48:38.177358    3537 log.go:172] (0x40009900a0) (5) Data frame handling\nI0828 04:48:38.177696    3537 log.go:172] (0x40009900a0) (5) Data frame sent\nI0828 04:48:38.178060    3537 log.go:172] (0x4000ab00b0) Data frame received for 5\nI0828 04:48:38.178141    3537 log.go:172] (0x40009900a0) (5) Data frame handling\nI0828 04:48:38.178230    3537 log.go:172] (0x40009900a0) (5) Data frame sent\nI0828 04:48:38.228268    3537 log.go:172] (0x4000ab00b0) Data frame received for 7\nI0828 04:48:38.228520    3537 log.go:172] (0x400080fc20) (7) Data frame handling\nI0828 04:48:38.228895    3537 log.go:172] (0x4000ab00b0) Data frame received for 1\nI0828 04:48:38.229095    3537 log.go:172] (0x400080fb80) (1) Data frame handling\nI0828 04:48:38.229288    3537 log.go:172] (0x400080fb80) (1) Data frame sent\nI0828 04:48:38.229549    3537 log.go:172] (0x4000ab00b0) Data frame received for 5\nI0828 04:48:38.229674    3537 log.go:172] (0x40009900a0) (5) Data frame handling\nI0828 04:48:38.231795    3537 log.go:172] (0x4000ab00b0) (0x40007aa0a0) Stream removed, broadcasting: 3\nI0828 04:48:38.232412    3537 log.go:172] (0x4000ab00b0) (0x400080fb80) Stream removed, broadcasting: 1\nI0828 04:48:38.235434    3537 log.go:172] (0x4000ab00b0) Go away received\nI0828 04:48:38.237941    3537 log.go:172] (0x4000ab00b0) (0x400080fb80) Stream removed, broadcasting: 1\nI0828 04:48:38.238200    3537 log.go:172] (0x4000ab00b0) (0x40007aa0a0) Stream removed, broadcasting: 3\nI0828 04:48:38.238292    3537 log.go:172] (0x4000ab00b0) (0x40009900a0) Stream removed, broadcasting: 5\nI0828 04:48:38.238465    3537 log.go:172] (0x4000ab00b0) (0x400080fc20) Stream removed, broadcasting: 7\n"
Aug 28 04:48:38.299: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:48:40.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1821" for this suite.

• [SLOW TEST:7.250 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843
    should create a job from an image, then delete the job [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":176,"skipped":2981,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:48:40.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 28 04:48:49.288: INFO: Successfully updated pod "labelsupdate0eefe9b9-3e31-4f9c-90eb-b634d7c87163"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:48:53.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7216" for this suite.

• [SLOW TEST:13.024 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2995,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:48:53.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:48:57.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4554" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2999,"failed":0}
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:48:57.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 28 04:48:57.696: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 28 04:48:57.718: INFO: Waiting for terminating namespaces to be deleted...
Aug 28 04:48:57.722: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 28 04:48:57.745: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 28 04:48:57.745: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 28 04:48:57.745: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 28 04:48:57.745: INFO: 	Container app ready: true, restart count 0
Aug 28 04:48:57.745: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 28 04:48:57.745: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 28 04:48:57.745: INFO: busybox-scheduling-c04fb27e-04fa-435e-9676-e0b0704f1ec5 from kubelet-test-4554 started at 2020-08-28 04:48:53 +0000 UTC (1 container statuses recorded)
Aug 28 04:48:57.745: INFO: 	Container busybox-scheduling-c04fb27e-04fa-435e-9676-e0b0704f1ec5 ready: true, restart count 0
Aug 28 04:48:57.745: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 28 04:48:57.783: INFO: labelsupdate0eefe9b9-3e31-4f9c-90eb-b634d7c87163 from downward-api-7216 started at 2020-08-28 04:48:40 +0000 UTC (1 container statuses recorded)
Aug 28 04:48:57.783: INFO: 	Container client-container ready: true, restart count 0
Aug 28 04:48:57.783: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 28 04:48:57.783: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 28 04:48:57.783: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container statuses recorded)
Aug 28 04:48:57.783: INFO: 	Container httpd ready: true, restart count 0
Aug 28 04:48:57.783: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 28 04:48:57.783: INFO: 	Container app ready: true, restart count 0
Aug 28 04:48:57.783: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 28 04:48:57.783: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162f5542bac176ed], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162f5542bcdcf9bc], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:48:58.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-650" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":179,"skipped":3008,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:48:58.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 28 04:48:59.032: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:49:06.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-586" for this suite.

• [SLOW TEST:7.650 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":180,"skipped":3017,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:49:06.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-cf8e432d-ea71-477f-a7fc-58cf51fdcc8b
STEP: Creating a pod to test consume secrets
Aug 28 04:49:06.634: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8b542938-bae4-4242-b4f7-9a265a9798a6" in namespace "projected-2736" to be "success or failure"
Aug 28 04:49:06.680: INFO: Pod "pod-projected-secrets-8b542938-bae4-4242-b4f7-9a265a9798a6": Phase="Pending", Reason="", readiness=false. Elapsed: 45.931101ms
Aug 28 04:49:09.012: INFO: Pod "pod-projected-secrets-8b542938-bae4-4242-b4f7-9a265a9798a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377277736s
Aug 28 04:49:11.017: INFO: Pod "pod-projected-secrets-8b542938-bae4-4242-b4f7-9a265a9798a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.383146993s
STEP: Saw pod success
Aug 28 04:49:11.018: INFO: Pod "pod-projected-secrets-8b542938-bae4-4242-b4f7-9a265a9798a6" satisfied condition "success or failure"
Aug 28 04:49:11.023: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-8b542938-bae4-4242-b4f7-9a265a9798a6 container projected-secret-volume-test: 
STEP: delete the pod
Aug 28 04:49:11.123: INFO: Waiting for pod pod-projected-secrets-8b542938-bae4-4242-b4f7-9a265a9798a6 to disappear
Aug 28 04:49:11.140: INFO: Pod pod-projected-secrets-8b542938-bae4-4242-b4f7-9a265a9798a6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:49:11.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2736" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":3038,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:49:11.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Aug 28 04:49:15.354: INFO: Pod pod-hostip-3f021a98-9dd9-44eb-8dfe-c0e13ccd7fe9 has hostIP: 172.18.0.3
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:49:15.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-324" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":3045,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:49:15.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 04:49:15.493: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 28 04:49:20.615: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 28 04:49:20.616: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 28 04:49:22.623: INFO: Creating deployment "test-rollover-deployment"
Aug 28 04:49:22.645: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 28 04:49:24.896: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 28 04:49:24.905: INFO: Ensure that both replica sets have 1 created replica
Aug 28 04:49:24.914: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 28 04:49:24.922: INFO: Updating deployment test-rollover-deployment
Aug 28 04:49:24.922: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 28 04:49:26.938: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 28 04:49:26.950: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 28 04:49:26.964: INFO: all replica sets need to contain the pod-template-hash label
Aug 28 04:49:26.964: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186963, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186963, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186965, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186962, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 04:49:28.980: INFO: all replica sets need to contain the pod-template-hash label
Aug 28 04:49:28.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186963, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186963, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186968, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186962, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 04:49:30.994: INFO: all replica sets need to contain the pod-template-hash label
Aug 28 04:49:30.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186963, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186963, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186968, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186962, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 04:49:32.981: INFO: all replica sets need to contain the pod-template-hash label
Aug 28 04:49:32.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186963, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186963, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186968, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186962, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 04:49:34.978: INFO: all replica sets need to contain the pod-template-hash label
Aug 28 04:49:34.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186963, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186963, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186968, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186962, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 04:49:36.977: INFO: all replica sets need to contain the pod-template-hash label
Aug 28 04:49:36.977: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186963, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186963, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186968, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186962, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 04:49:39.571: INFO: 
Aug 28 04:49:39.572: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186963, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186963, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186978, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734186962, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 04:49:41.107: INFO: 
Aug 28 04:49:41.107: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 28 04:49:41.118: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-9959 /apis/apps/v1/namespaces/deployment-9959/deployments/test-rollover-deployment 50f7185c-1a74-4f59-8f1d-db020081fb9a 4493702 2 2020-08-28 04:49:22 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400287d528  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-28 04:49:23 +0000 UTC,LastTransitionTime:2020-08-28 04:49:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-08-28 04:49:39 +0000 UTC,LastTransitionTime:2020-08-28 04:49:22 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 28 04:49:41.124: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-9959 /apis/apps/v1/namespaces/deployment-9959/replicasets/test-rollover-deployment-574d6dfbff 2ca36f4b-e3d6-457f-a770-8ed6d7bdf4c2 4493689 2 2020-08-28 04:49:24 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 50f7185c-1a74-4f59-8f1d-db020081fb9a 0x400287dc87 0x400287dc88}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400287dd38  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 28 04:49:41.124: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 28 04:49:41.124: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-9959 /apis/apps/v1/namespaces/deployment-9959/replicasets/test-rollover-controller f3b2d253-659d-41db-bfaa-65dfde042c03 4493700 2 2020-08-28 04:49:15 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 50f7185c-1a74-4f59-8f1d-db020081fb9a 0x400287db1f 0x400287db30}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x400287dbf8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 28 04:49:41.125: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-9959 /apis/apps/v1/namespaces/deployment-9959/replicasets/test-rollover-deployment-f6c94f66c 11229ba1-6784-4b66-96c0-a46cf267bc0f 4493637 2 2020-08-28 04:49:22 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 50f7185c-1a74-4f59-8f1d-db020081fb9a 0x400287ddd0 0x400287ddd1}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400287de88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 28 04:49:41.133: INFO: Pod "test-rollover-deployment-574d6dfbff-kh9xb" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-kh9xb test-rollover-deployment-574d6dfbff- deployment-9959 /api/v1/namespaces/deployment-9959/pods/test-rollover-deployment-574d6dfbff-kh9xb f9e73122-a360-40ab-88fd-c6498e0415ef 4493655 0 2020-08-28 04:49:24 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 2ca36f4b-e3d6-457f-a770-8ed6d7bdf4c2 0x40039e6447 0x40039e6448}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-grxtx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-grxtx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-grxtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:49:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:49:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:49:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:49:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.163,StartTime:2020-08-28 04:49:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 04:49:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://d8779510377cd1d0c00423a3d31c08fd0b037e2eb4f86aeef1bac620aa3fc0c4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.163,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:49:41.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9959" for this suite.

• [SLOW TEST:25.774 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":183,"skipped":3098,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:49:41.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:49:59.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8700" for this suite.

• [SLOW TEST:17.868 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":184,"skipped":3103,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:49:59.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 28 04:49:59.176: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7e31058-7be3-4a3e-90c2-4b1f68e527bd" in namespace "projected-5484" to be "success or failure"
Aug 28 04:49:59.181: INFO: Pod "downwardapi-volume-b7e31058-7be3-4a3e-90c2-4b1f68e527bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.874041ms
Aug 28 04:50:01.688: INFO: Pod "downwardapi-volume-b7e31058-7be3-4a3e-90c2-4b1f68e527bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.51202428s
Aug 28 04:50:03.695: INFO: Pod "downwardapi-volume-b7e31058-7be3-4a3e-90c2-4b1f68e527bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.51941339s
STEP: Saw pod success
Aug 28 04:50:03.696: INFO: Pod "downwardapi-volume-b7e31058-7be3-4a3e-90c2-4b1f68e527bd" satisfied condition "success or failure"
Aug 28 04:50:03.711: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b7e31058-7be3-4a3e-90c2-4b1f68e527bd container client-container: 
STEP: delete the pod
Aug 28 04:50:03.958: INFO: Waiting for pod downwardapi-volume-b7e31058-7be3-4a3e-90c2-4b1f68e527bd to disappear
Aug 28 04:50:03.998: INFO: Pod downwardapi-volume-b7e31058-7be3-4a3e-90c2-4b1f68e527bd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:50:03.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5484" for this suite.

• [SLOW TEST:5.012 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3114,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:50:04.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:50:04.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1232" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3143,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:50:04.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 04:50:05.049: INFO: Create a RollingUpdate DaemonSet
Aug 28 04:50:05.054: INFO: Check that daemon pods launch on every node of the cluster
Aug 28 04:50:05.060: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 04:50:05.064: INFO: Number of nodes with available pods: 0
Aug 28 04:50:05.064: INFO: Node jerma-worker is running more than one daemon pod
Aug 28 04:50:06.233: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 04:50:06.247: INFO: Number of nodes with available pods: 0
Aug 28 04:50:06.247: INFO: Node jerma-worker is running more than one daemon pod
Aug 28 04:50:07.072: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 04:50:07.079: INFO: Number of nodes with available pods: 0
Aug 28 04:50:07.079: INFO: Node jerma-worker is running more than one daemon pod
Aug 28 04:50:08.075: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 04:50:08.083: INFO: Number of nodes with available pods: 0
Aug 28 04:50:08.083: INFO: Node jerma-worker is running more than one daemon pod
Aug 28 04:50:09.129: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 04:50:09.187: INFO: Number of nodes with available pods: 0
Aug 28 04:50:09.187: INFO: Node jerma-worker is running more than one daemon pod
Aug 28 04:50:10.114: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 04:50:10.343: INFO: Number of nodes with available pods: 1
Aug 28 04:50:10.343: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 28 04:50:11.075: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 04:50:11.081: INFO: Number of nodes with available pods: 2
Aug 28 04:50:11.081: INFO: Number of running nodes: 2, number of available pods: 2
Aug 28 04:50:11.081: INFO: Update the DaemonSet to trigger a rollout
Aug 28 04:50:11.131: INFO: Updating DaemonSet daemon-set
Aug 28 04:50:22.217: INFO: Roll back the DaemonSet before rollout is complete
Aug 28 04:50:22.226: INFO: Updating DaemonSet daemon-set
Aug 28 04:50:22.226: INFO: Make sure DaemonSet rollback is complete
Aug 28 04:50:22.239: INFO: Wrong image for pod: daemon-set-9vwms. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 28 04:50:22.239: INFO: Pod daemon-set-9vwms is not available
Aug 28 04:50:22.247: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 04:50:23.255: INFO: Wrong image for pod: daemon-set-9vwms. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 28 04:50:23.255: INFO: Pod daemon-set-9vwms is not available
Aug 28 04:50:23.264: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 04:50:24.314: INFO: Wrong image for pod: daemon-set-9vwms. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 28 04:50:24.314: INFO: Pod daemon-set-9vwms is not available
Aug 28 04:50:24.337: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 04:50:25.255: INFO: Wrong image for pod: daemon-set-9vwms. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 28 04:50:25.255: INFO: Pod daemon-set-9vwms is not available
Aug 28 04:50:25.263: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 04:50:26.254: INFO: Pod daemon-set-blkhv is not available
Aug 28 04:50:26.263: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3271, will wait for the garbage collector to delete the pods
Aug 28 04:50:26.334: INFO: Deleting DaemonSet.extensions daemon-set took: 6.93099ms
Aug 28 04:50:26.635: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.716507ms
Aug 28 04:50:41.641: INFO: Number of nodes with available pods: 0
Aug 28 04:50:41.642: INFO: Number of running nodes: 0, number of available pods: 0
Aug 28 04:50:41.646: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3271/daemonsets","resourceVersion":"4494052"},"items":null}

Aug 28 04:50:41.649: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3271/pods","resourceVersion":"4494052"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:50:41.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3271" for this suite.

• [SLOW TEST:37.321 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":187,"skipped":3227,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:50:41.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 28 04:50:51.864: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 04:50:51.896: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 04:50:53.897: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 04:50:53.904: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 04:50:55.897: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 04:50:55.904: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 04:50:57.897: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 04:50:57.904: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 04:50:59.897: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 04:50:59.905: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 04:51:01.897: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 04:51:01.903: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:51:01.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1772" for this suite.

• [SLOW TEST:20.247 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3236,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:51:01.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 28 04:51:10.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 28 04:51:10.192: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 28 04:51:12.193: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 28 04:51:12.199: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 28 04:51:14.193: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 28 04:51:14.214: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 28 04:51:16.193: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 28 04:51:16.199: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:51:16.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8745" for this suite.

• [SLOW TEST:14.305 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3249,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:51:16.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 28 04:51:16.334: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fbb45ea-b54b-4769-855f-dfd3c4850a66" in namespace "projected-7395" to be "success or failure"
Aug 28 04:51:16.367: INFO: Pod "downwardapi-volume-1fbb45ea-b54b-4769-855f-dfd3c4850a66": Phase="Pending", Reason="", readiness=false. Elapsed: 32.460021ms
Aug 28 04:51:18.425: INFO: Pod "downwardapi-volume-1fbb45ea-b54b-4769-855f-dfd3c4850a66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090983203s
Aug 28 04:51:20.432: INFO: Pod "downwardapi-volume-1fbb45ea-b54b-4769-855f-dfd3c4850a66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098082458s
STEP: Saw pod success
Aug 28 04:51:20.433: INFO: Pod "downwardapi-volume-1fbb45ea-b54b-4769-855f-dfd3c4850a66" satisfied condition "success or failure"
Aug 28 04:51:20.437: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1fbb45ea-b54b-4769-855f-dfd3c4850a66 container client-container: 
STEP: delete the pod
Aug 28 04:51:20.553: INFO: Waiting for pod downwardapi-volume-1fbb45ea-b54b-4769-855f-dfd3c4850a66 to disappear
Aug 28 04:51:20.625: INFO: Pod downwardapi-volume-1fbb45ea-b54b-4769-855f-dfd3c4850a66 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:51:20.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7395" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3254,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:51:20.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:51:20.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9234" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":191,"skipped":3286,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:51:20.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-cf8c8f09-7c03-4926-a4fd-2b82ac264e52
STEP: Creating a pod to test consume secrets
Aug 28 04:51:21.113: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-99b65c44-ebe2-45a6-9e70-4cde909beef7" in namespace "projected-1713" to be "success or failure"
Aug 28 04:51:21.150: INFO: Pod "pod-projected-secrets-99b65c44-ebe2-45a6-9e70-4cde909beef7": Phase="Pending", Reason="", readiness=false. Elapsed: 36.39625ms
Aug 28 04:51:23.157: INFO: Pod "pod-projected-secrets-99b65c44-ebe2-45a6-9e70-4cde909beef7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04308143s
Aug 28 04:51:25.163: INFO: Pod "pod-projected-secrets-99b65c44-ebe2-45a6-9e70-4cde909beef7": Phase="Running", Reason="", readiness=true. Elapsed: 4.049663653s
Aug 28 04:51:27.170: INFO: Pod "pod-projected-secrets-99b65c44-ebe2-45a6-9e70-4cde909beef7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056507863s
STEP: Saw pod success
Aug 28 04:51:27.170: INFO: Pod "pod-projected-secrets-99b65c44-ebe2-45a6-9e70-4cde909beef7" satisfied condition "success or failure"
Aug 28 04:51:27.175: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-99b65c44-ebe2-45a6-9e70-4cde909beef7 container projected-secret-volume-test: 
STEP: delete the pod
Aug 28 04:51:27.199: INFO: Waiting for pod pod-projected-secrets-99b65c44-ebe2-45a6-9e70-4cde909beef7 to disappear
Aug 28 04:51:27.243: INFO: Pod pod-projected-secrets-99b65c44-ebe2-45a6-9e70-4cde909beef7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:51:27.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1713" for this suite.

• [SLOW TEST:6.388 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3309,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:51:27.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-75360061-9319-4564-a6ca-8ad95933746a
STEP: Creating a pod to test consume secrets
Aug 28 04:51:27.374: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-448c3eb9-3c49-4ceb-972e-d41968afa6b0" in namespace "projected-5677" to be "success or failure"
Aug 28 04:51:27.407: INFO: Pod "pod-projected-secrets-448c3eb9-3c49-4ceb-972e-d41968afa6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 33.008686ms
Aug 28 04:51:29.431: INFO: Pod "pod-projected-secrets-448c3eb9-3c49-4ceb-972e-d41968afa6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056993491s
Aug 28 04:51:31.437: INFO: Pod "pod-projected-secrets-448c3eb9-3c49-4ceb-972e-d41968afa6b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0635179s
STEP: Saw pod success
Aug 28 04:51:31.438: INFO: Pod "pod-projected-secrets-448c3eb9-3c49-4ceb-972e-d41968afa6b0" satisfied condition "success or failure"
Aug 28 04:51:31.442: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-448c3eb9-3c49-4ceb-972e-d41968afa6b0 container projected-secret-volume-test: 
STEP: delete the pod
Aug 28 04:51:31.694: INFO: Waiting for pod pod-projected-secrets-448c3eb9-3c49-4ceb-972e-d41968afa6b0 to disappear
Aug 28 04:51:31.715: INFO: Pod pod-projected-secrets-448c3eb9-3c49-4ceb-972e-d41968afa6b0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:51:31.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5677" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3310,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:51:31.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-qgpr
STEP: Creating a pod to test atomic-volume-subpath
Aug 28 04:51:31.836: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qgpr" in namespace "subpath-2590" to be "success or failure"
Aug 28 04:51:31.840: INFO: Pod "pod-subpath-test-downwardapi-qgpr": Phase="Pending", Reason="", readiness=false. Elapsed: 3.909659ms
Aug 28 04:51:33.859: INFO: Pod "pod-subpath-test-downwardapi-qgpr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022906756s
Aug 28 04:51:35.939: INFO: Pod "pod-subpath-test-downwardapi-qgpr": Phase="Running", Reason="", readiness=true. Elapsed: 4.102842845s
Aug 28 04:51:38.060: INFO: Pod "pod-subpath-test-downwardapi-qgpr": Phase="Running", Reason="", readiness=true. Elapsed: 6.223744213s
Aug 28 04:51:40.065: INFO: Pod "pod-subpath-test-downwardapi-qgpr": Phase="Running", Reason="", readiness=true. Elapsed: 8.228934604s
Aug 28 04:51:42.072: INFO: Pod "pod-subpath-test-downwardapi-qgpr": Phase="Running", Reason="", readiness=true. Elapsed: 10.235711478s
Aug 28 04:51:44.079: INFO: Pod "pod-subpath-test-downwardapi-qgpr": Phase="Running", Reason="", readiness=true. Elapsed: 12.242619745s
Aug 28 04:51:46.085: INFO: Pod "pod-subpath-test-downwardapi-qgpr": Phase="Running", Reason="", readiness=true. Elapsed: 14.248567502s
Aug 28 04:51:48.091: INFO: Pod "pod-subpath-test-downwardapi-qgpr": Phase="Running", Reason="", readiness=true. Elapsed: 16.254637677s
Aug 28 04:51:50.097: INFO: Pod "pod-subpath-test-downwardapi-qgpr": Phase="Running", Reason="", readiness=true. Elapsed: 18.260183022s
Aug 28 04:51:52.112: INFO: Pod "pod-subpath-test-downwardapi-qgpr": Phase="Running", Reason="", readiness=true. Elapsed: 20.275202468s
Aug 28 04:51:54.118: INFO: Pod "pod-subpath-test-downwardapi-qgpr": Phase="Running", Reason="", readiness=true. Elapsed: 22.281693948s
Aug 28 04:51:56.125: INFO: Pod "pod-subpath-test-downwardapi-qgpr": Phase="Running", Reason="", readiness=true. Elapsed: 24.288709324s
Aug 28 04:51:58.132: INFO: Pod "pod-subpath-test-downwardapi-qgpr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.295636977s
STEP: Saw pod success
Aug 28 04:51:58.132: INFO: Pod "pod-subpath-test-downwardapi-qgpr" satisfied condition "success or failure"
Aug 28 04:51:58.137: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-qgpr container test-container-subpath-downwardapi-qgpr: 
STEP: delete the pod
Aug 28 04:51:58.164: INFO: Waiting for pod pod-subpath-test-downwardapi-qgpr to disappear
Aug 28 04:51:58.168: INFO: Pod pod-subpath-test-downwardapi-qgpr no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-qgpr
Aug 28 04:51:58.168: INFO: Deleting pod "pod-subpath-test-downwardapi-qgpr" in namespace "subpath-2590"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:51:58.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2590" for this suite.

• [SLOW TEST:26.470 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":194,"skipped":3332,"failed":0}
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:51:58.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8944
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 28 04:51:58.290: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 28 04:52:22.560: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.173 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8944 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 04:52:22.561: INFO: >>> kubeConfig: /root/.kube/config
I0828 04:52:22.621774       8 log.go:172] (0x40028b00b0) (0x4001b46780) Create stream
I0828 04:52:22.621934       8 log.go:172] (0x40028b00b0) (0x4001b46780) Stream added, broadcasting: 1
I0828 04:52:22.625570       8 log.go:172] (0x40028b00b0) Reply frame received for 1
I0828 04:52:22.625806       8 log.go:172] (0x40028b00b0) (0x400129c3c0) Create stream
I0828 04:52:22.625893       8 log.go:172] (0x40028b00b0) (0x400129c3c0) Stream added, broadcasting: 3
I0828 04:52:22.627833       8 log.go:172] (0x40028b00b0) Reply frame received for 3
I0828 04:52:22.628074       8 log.go:172] (0x40028b00b0) (0x400129c5a0) Create stream
I0828 04:52:22.628185       8 log.go:172] (0x40028b00b0) (0x400129c5a0) Stream added, broadcasting: 5
I0828 04:52:22.630605       8 log.go:172] (0x40028b00b0) Reply frame received for 5
I0828 04:52:23.702387       8 log.go:172] (0x40028b00b0) Data frame received for 3
I0828 04:52:23.702599       8 log.go:172] (0x400129c3c0) (3) Data frame handling
I0828 04:52:23.702726       8 log.go:172] (0x400129c3c0) (3) Data frame sent
I0828 04:52:23.702865       8 log.go:172] (0x40028b00b0) Data frame received for 3
I0828 04:52:23.702957       8 log.go:172] (0x400129c3c0) (3) Data frame handling
I0828 04:52:23.703086       8 log.go:172] (0x40028b00b0) Data frame received for 5
I0828 04:52:23.703233       8 log.go:172] (0x400129c5a0) (5) Data frame handling
I0828 04:52:23.704091       8 log.go:172] (0x40028b00b0) Data frame received for 1
I0828 04:52:23.704202       8 log.go:172] (0x4001b46780) (1) Data frame handling
I0828 04:52:23.704300       8 log.go:172] (0x4001b46780) (1) Data frame sent
I0828 04:52:23.704409       8 log.go:172] (0x40028b00b0) (0x4001b46780) Stream removed, broadcasting: 1
I0828 04:52:23.704533       8 log.go:172] (0x40028b00b0) Go away received
I0828 04:52:23.704810       8 log.go:172] (0x40028b00b0) (0x4001b46780) Stream removed, broadcasting: 1
I0828 04:52:23.704896       8 log.go:172] (0x40028b00b0) (0x400129c3c0) Stream removed, broadcasting: 3
I0828 04:52:23.704958       8 log.go:172] (0x40028b00b0) (0x400129c5a0) Stream removed, broadcasting: 5
Aug 28 04:52:23.705: INFO: Found all expected endpoints: [netserver-0]
Aug 28 04:52:23.710: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.44 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8944 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 04:52:23.710: INFO: >>> kubeConfig: /root/.kube/config
I0828 04:52:23.767742       8 log.go:172] (0x40028b06e0) (0x4001b47040) Create stream
I0828 04:52:23.767873       8 log.go:172] (0x40028b06e0) (0x4001b47040) Stream added, broadcasting: 1
I0828 04:52:23.771238       8 log.go:172] (0x40028b06e0) Reply frame received for 1
I0828 04:52:23.771436       8 log.go:172] (0x40028b06e0) (0x400183c640) Create stream
I0828 04:52:23.771546       8 log.go:172] (0x40028b06e0) (0x400183c640) Stream added, broadcasting: 3
I0828 04:52:23.773411       8 log.go:172] (0x40028b06e0) Reply frame received for 3
I0828 04:52:23.773653       8 log.go:172] (0x40028b06e0) (0x400129c820) Create stream
I0828 04:52:23.773769       8 log.go:172] (0x40028b06e0) (0x400129c820) Stream added, broadcasting: 5
I0828 04:52:23.775611       8 log.go:172] (0x40028b06e0) Reply frame received for 5
I0828 04:52:24.845307       8 log.go:172] (0x40028b06e0) Data frame received for 3
I0828 04:52:24.845465       8 log.go:172] (0x40028b06e0) Data frame received for 5
I0828 04:52:24.845601       8 log.go:172] (0x400129c820) (5) Data frame handling
I0828 04:52:24.845785       8 log.go:172] (0x400183c640) (3) Data frame handling
I0828 04:52:24.845940       8 log.go:172] (0x400183c640) (3) Data frame sent
I0828 04:52:24.846042       8 log.go:172] (0x40028b06e0) Data frame received for 3
I0828 04:52:24.846128       8 log.go:172] (0x400183c640) (3) Data frame handling
I0828 04:52:24.847985       8 log.go:172] (0x40028b06e0) Data frame received for 1
I0828 04:52:24.848050       8 log.go:172] (0x4001b47040) (1) Data frame handling
I0828 04:52:24.848112       8 log.go:172] (0x4001b47040) (1) Data frame sent
I0828 04:52:24.848208       8 log.go:172] (0x40028b06e0) (0x4001b47040) Stream removed, broadcasting: 1
I0828 04:52:24.848688       8 log.go:172] (0x40028b06e0) Go away received
I0828 04:52:24.849080       8 log.go:172] (0x40028b06e0) (0x4001b47040) Stream removed, broadcasting: 1
I0828 04:52:24.849252       8 log.go:172] (0x40028b06e0) (0x400183c640) Stream removed, broadcasting: 3
I0828 04:52:24.849353       8 log.go:172] (0x40028b06e0) (0x400129c820) Stream removed, broadcasting: 5
Aug 28 04:52:24.849: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:52:24.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8944" for this suite.

• [SLOW TEST:26.675 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3333,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:52:24.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 04:52:29.714: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 04:52:32.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187149, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187149, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187150, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187149, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 04:52:34.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187149, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187149, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187150, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187149, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 04:52:38.025: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:52:50.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3461" for this suite.
STEP: Destroying namespace "webhook-3461-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:25.494 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":196,"skipped":3355,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:52:50.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-ad21b58d-b072-412e-bc1c-1c57dd8e38ed
STEP: Creating a pod to test consume configMaps
Aug 28 04:52:50.471: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5540b3f8-4ead-4b60-bad3-f3d431d9e78a" in namespace "projected-4890" to be "success or failure"
Aug 28 04:52:50.481: INFO: Pod "pod-projected-configmaps-5540b3f8-4ead-4b60-bad3-f3d431d9e78a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.904423ms
Aug 28 04:52:52.488: INFO: Pod "pod-projected-configmaps-5540b3f8-4ead-4b60-bad3-f3d431d9e78a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016439113s
Aug 28 04:52:54.494: INFO: Pod "pod-projected-configmaps-5540b3f8-4ead-4b60-bad3-f3d431d9e78a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022808517s
STEP: Saw pod success
Aug 28 04:52:54.494: INFO: Pod "pod-projected-configmaps-5540b3f8-4ead-4b60-bad3-f3d431d9e78a" satisfied condition "success or failure"
Aug 28 04:52:54.498: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-5540b3f8-4ead-4b60-bad3-f3d431d9e78a container projected-configmap-volume-test: 
STEP: delete the pod
Aug 28 04:52:54.585: INFO: Waiting for pod pod-projected-configmaps-5540b3f8-4ead-4b60-bad3-f3d431d9e78a to disappear
Aug 28 04:52:54.595: INFO: Pod pod-projected-configmaps-5540b3f8-4ead-4b60-bad3-f3d431d9e78a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:52:54.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4890" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3378,"failed":0}
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:52:54.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Aug 28 04:52:55.237: INFO: Waiting up to 5m0s for pod "client-containers-842a3f58-ca08-4ede-b2c6-694090528dd0" in namespace "containers-6299" to be "success or failure"
Aug 28 04:52:55.307: INFO: Pod "client-containers-842a3f58-ca08-4ede-b2c6-694090528dd0": Phase="Pending", Reason="", readiness=false. Elapsed: 70.36283ms
Aug 28 04:52:57.330: INFO: Pod "client-containers-842a3f58-ca08-4ede-b2c6-694090528dd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092932388s
Aug 28 04:52:59.337: INFO: Pod "client-containers-842a3f58-ca08-4ede-b2c6-694090528dd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099900301s
STEP: Saw pod success
Aug 28 04:52:59.337: INFO: Pod "client-containers-842a3f58-ca08-4ede-b2c6-694090528dd0" satisfied condition "success or failure"
Aug 28 04:52:59.342: INFO: Trying to get logs from node jerma-worker pod client-containers-842a3f58-ca08-4ede-b2c6-694090528dd0 container test-container: 
STEP: delete the pod
Aug 28 04:52:59.365: INFO: Waiting for pod client-containers-842a3f58-ca08-4ede-b2c6-694090528dd0 to disappear
Aug 28 04:52:59.437: INFO: Pod client-containers-842a3f58-ca08-4ede-b2c6-694090528dd0 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:52:59.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6299" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3380,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:52:59.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-2cdf0008-3f32-4cb9-b142-9b23ff896137
STEP: Creating configMap with name cm-test-opt-upd-a59690a0-bd44-4d32-9cd4-937194ae6fc9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-2cdf0008-3f32-4cb9-b142-9b23ff896137
STEP: Updating configmap cm-test-opt-upd-a59690a0-bd44-4d32-9cd4-937194ae6fc9
STEP: Creating configMap with name cm-test-opt-create-ad15bcc4-ba8f-46ad-88d7-25b90727dafd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:53:10.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4751" for this suite.

• [SLOW TEST:10.836 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3409,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:53:10.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-91c193f5-b5e8-4f4c-ab4b-186537f88bf4
STEP: Creating a pod to test consume secrets
Aug 28 04:53:10.549: INFO: Waiting up to 5m0s for pod "pod-secrets-ec06ce31-f45e-44fc-b080-01d34cb2ccab" in namespace "secrets-8140" to be "success or failure"
Aug 28 04:53:10.578: INFO: Pod "pod-secrets-ec06ce31-f45e-44fc-b080-01d34cb2ccab": Phase="Pending", Reason="", readiness=false. Elapsed: 29.018642ms
Aug 28 04:53:12.593: INFO: Pod "pod-secrets-ec06ce31-f45e-44fc-b080-01d34cb2ccab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04340119s
Aug 28 04:53:14.599: INFO: Pod "pod-secrets-ec06ce31-f45e-44fc-b080-01d34cb2ccab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04915189s
Aug 28 04:53:16.609: INFO: Pod "pod-secrets-ec06ce31-f45e-44fc-b080-01d34cb2ccab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059610033s
STEP: Saw pod success
Aug 28 04:53:16.609: INFO: Pod "pod-secrets-ec06ce31-f45e-44fc-b080-01d34cb2ccab" satisfied condition "success or failure"
Aug 28 04:53:16.614: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-ec06ce31-f45e-44fc-b080-01d34cb2ccab container secret-volume-test: 
STEP: delete the pod
Aug 28 04:53:16.693: INFO: Waiting for pod pod-secrets-ec06ce31-f45e-44fc-b080-01d34cb2ccab to disappear
Aug 28 04:53:16.796: INFO: Pod pod-secrets-ec06ce31-f45e-44fc-b080-01d34cb2ccab no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:53:16.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8140" for this suite.

• [SLOW TEST:6.519 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3414,"failed":0}
SS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:53:16.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:53:17.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2389" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":201,"skipped":3416,"failed":0}

------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:53:17.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-a825f45f-ea7a-4f45-9896-aadca392db14
STEP: Creating a pod to test consume configMaps
Aug 28 04:53:17.372: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ce3ad90-c7a6-4133-9aa1-1c94d653a548" in namespace "configmap-9370" to be "success or failure"
Aug 28 04:53:17.393: INFO: Pod "pod-configmaps-7ce3ad90-c7a6-4133-9aa1-1c94d653a548": Phase="Pending", Reason="", readiness=false. Elapsed: 21.224978ms
Aug 28 04:53:19.472: INFO: Pod "pod-configmaps-7ce3ad90-c7a6-4133-9aa1-1c94d653a548": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09999698s
Aug 28 04:53:21.478: INFO: Pod "pod-configmaps-7ce3ad90-c7a6-4133-9aa1-1c94d653a548": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105908531s
Aug 28 04:53:23.485: INFO: Pod "pod-configmaps-7ce3ad90-c7a6-4133-9aa1-1c94d653a548": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.112926388s
STEP: Saw pod success
Aug 28 04:53:23.485: INFO: Pod "pod-configmaps-7ce3ad90-c7a6-4133-9aa1-1c94d653a548" satisfied condition "success or failure"
Aug 28 04:53:23.490: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-7ce3ad90-c7a6-4133-9aa1-1c94d653a548 container configmap-volume-test: 
STEP: delete the pod
Aug 28 04:53:23.516: INFO: Waiting for pod pod-configmaps-7ce3ad90-c7a6-4133-9aa1-1c94d653a548 to disappear
Aug 28 04:53:23.539: INFO: Pod pod-configmaps-7ce3ad90-c7a6-4133-9aa1-1c94d653a548 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:53:23.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9370" for this suite.

• [SLOW TEST:6.378 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3416,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:53:23.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 04:53:23.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:53:27.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-566" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3423,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:53:27.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 28 04:53:36.038: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 28 04:53:36.059: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 28 04:53:38.060: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 28 04:53:38.067: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 28 04:53:40.060: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 28 04:53:40.067: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:53:40.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5195" for this suite.

• [SLOW TEST:12.237 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3455,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:53:40.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-6046/configmap-test-2d232c8a-6051-4a03-b05a-88d07949e3b2
STEP: Creating a pod to test consume configMaps
Aug 28 04:53:40.194: INFO: Waiting up to 5m0s for pod "pod-configmaps-472abc36-249f-4abb-b137-882773a3dc43" in namespace "configmap-6046" to be "success or failure"
Aug 28 04:53:40.201: INFO: Pod "pod-configmaps-472abc36-249f-4abb-b137-882773a3dc43": Phase="Pending", Reason="", readiness=false. Elapsed: 7.392715ms
Aug 28 04:53:42.207: INFO: Pod "pod-configmaps-472abc36-249f-4abb-b137-882773a3dc43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013518529s
Aug 28 04:53:44.215: INFO: Pod "pod-configmaps-472abc36-249f-4abb-b137-882773a3dc43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020622248s
STEP: Saw pod success
Aug 28 04:53:44.215: INFO: Pod "pod-configmaps-472abc36-249f-4abb-b137-882773a3dc43" satisfied condition "success or failure"
Aug 28 04:53:44.220: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-472abc36-249f-4abb-b137-882773a3dc43 container env-test: 
STEP: delete the pod
Aug 28 04:53:44.398: INFO: Waiting for pod pod-configmaps-472abc36-249f-4abb-b137-882773a3dc43 to disappear
Aug 28 04:53:44.410: INFO: Pod pod-configmaps-472abc36-249f-4abb-b137-882773a3dc43 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:53:44.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6046" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3472,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:53:44.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 28 04:53:44.550: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90e28e8f-b2be-44b0-907a-67d01c5c903f" in namespace "downward-api-5666" to be "success or failure"
Aug 28 04:53:44.575: INFO: Pod "downwardapi-volume-90e28e8f-b2be-44b0-907a-67d01c5c903f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.502751ms
Aug 28 04:53:46.749: INFO: Pod "downwardapi-volume-90e28e8f-b2be-44b0-907a-67d01c5c903f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19903364s
Aug 28 04:53:48.757: INFO: Pod "downwardapi-volume-90e28e8f-b2be-44b0-907a-67d01c5c903f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206503817s
Aug 28 04:53:50.765: INFO: Pod "downwardapi-volume-90e28e8f-b2be-44b0-907a-67d01c5c903f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.214453073s
STEP: Saw pod success
Aug 28 04:53:50.765: INFO: Pod "downwardapi-volume-90e28e8f-b2be-44b0-907a-67d01c5c903f" satisfied condition "success or failure"
Aug 28 04:53:50.769: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-90e28e8f-b2be-44b0-907a-67d01c5c903f container client-container: 
STEP: delete the pod
Aug 28 04:53:50.794: INFO: Waiting for pod downwardapi-volume-90e28e8f-b2be-44b0-907a-67d01c5c903f to disappear
Aug 28 04:53:50.803: INFO: Pod downwardapi-volume-90e28e8f-b2be-44b0-907a-67d01c5c903f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:53:50.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5666" for this suite.

• [SLOW TEST:6.395 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3477,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:53:50.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 28 04:53:50.902: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 28 04:53:50.937: INFO: Waiting for terminating namespaces to be deleted...
Aug 28 04:53:50.941: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 28 04:53:50.958: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 28 04:53:50.959: INFO: 	Container app ready: true, restart count 0
Aug 28 04:53:50.959: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 28 04:53:50.959: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 28 04:53:50.959: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 28 04:53:50.959: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 28 04:53:50.959: INFO: pod-exec-websocket-304b2f4e-94a6-4f2f-b5a5-e841d52b8f47 from pods-566 started at 2020-08-28 04:53:23 +0000 UTC (1 container statuses recorded)
Aug 28 04:53:50.959: INFO: 	Container main ready: true, restart count 0
Aug 28 04:53:50.959: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 28 04:53:50.976: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 28 04:53:50.976: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 28 04:53:50.976: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container statuses recorded)
Aug 28 04:53:50.976: INFO: 	Container httpd ready: true, restart count 0
Aug 28 04:53:50.976: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 28 04:53:50.976: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 28 04:53:50.977: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 28 04:53:50.977: INFO: 	Container app ready: true, restart count 0
Aug 28 04:53:50.977: INFO: pod-handle-http-request from container-lifecycle-hook-5195 started at 2020-08-28 04:53:27 +0000 UTC (1 container statuses recorded)
Aug 28 04:53:50.977: INFO: 	Container pod-handle-http-request ready: false, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Aug 28 04:53:51.091: INFO: Pod pod-handle-http-request requesting resource cpu=0m on Node jerma-worker2
Aug 28 04:53:51.091: INFO: Pod daemon-set-4l8wc requesting resource cpu=0m on Node jerma-worker
Aug 28 04:53:51.091: INFO: Pod daemon-set-cxv46 requesting resource cpu=0m on Node jerma-worker2
Aug 28 04:53:51.091: INFO: Pod test-recreate-deployment-5f94c574ff-k4dkm requesting resource cpu=0m on Node jerma-worker2
Aug 28 04:53:51.091: INFO: Pod kindnet-gxck9 requesting resource cpu=100m on Node jerma-worker2
Aug 28 04:53:51.091: INFO: Pod kindnet-tfrcx requesting resource cpu=100m on Node jerma-worker
Aug 28 04:53:51.091: INFO: Pod kube-proxy-ckhpn requesting resource cpu=0m on Node jerma-worker2
Aug 28 04:53:51.091: INFO: Pod kube-proxy-lgd85 requesting resource cpu=0m on Node jerma-worker
Aug 28 04:53:51.091: INFO: Pod pod-exec-websocket-304b2f4e-94a6-4f2f-b5a5-e841d52b8f47 requesting resource cpu=0m on Node jerma-worker
STEP: Starting Pods to consume most of the cluster CPU.
Aug 28 04:53:51.092: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker
Aug 28 04:53:51.127: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-908063f3-ab48-4c12-b6df-721af1b66208.162f558704dbcf11], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5688/filler-pod-908063f3-ab48-4c12-b6df-721af1b66208 to jerma-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-908063f3-ab48-4c12-b6df-721af1b66208.162f5587542c65de], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-908063f3-ab48-4c12-b6df-721af1b66208.162f5587b9a17d4f], Reason = [Created], Message = [Created container filler-pod-908063f3-ab48-4c12-b6df-721af1b66208]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-908063f3-ab48-4c12-b6df-721af1b66208.162f5587e97f1001], Reason = [Started], Message = [Started container filler-pod-908063f3-ab48-4c12-b6df-721af1b66208]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dc1956c6-0466-445c-9d0b-42ff0ac1a0d2.162f5587072b8584], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5688/filler-pod-dc1956c6-0466-445c-9d0b-42ff0ac1a0d2 to jerma-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dc1956c6-0466-445c-9d0b-42ff0ac1a0d2.162f5587a75c0fb2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dc1956c6-0466-445c-9d0b-42ff0ac1a0d2.162f558816370aeb], Reason = [Created], Message = [Created container filler-pod-dc1956c6-0466-445c-9d0b-42ff0ac1a0d2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dc1956c6-0466-445c-9d0b-42ff0ac1a0d2.162f558829f9289a], Reason = [Started], Message = [Started container filler-pod-dc1956c6-0466-445c-9d0b-42ff0ac1a0d2]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162f558871b687d5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:53:58.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5688" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:7.843 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":207,"skipped":3528,"failed":0}
SSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:53:58.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:54:02.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2575" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3531,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:54:02.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 04:54:02.953: INFO: Creating deployment "test-recreate-deployment"
Aug 28 04:54:02.961: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Aug 28 04:54:03.015: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 28 04:54:05.571: INFO: Waiting for deployment "test-recreate-deployment" to complete
Aug 28 04:54:05.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187243, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187243, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187243, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187242, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 04:54:07.603: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 28 04:54:07.614: INFO: Updating deployment test-recreate-deployment
Aug 28 04:54:07.614: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 28 04:54:08.509: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-9392 /apis/apps/v1/namespaces/deployment-9392/deployments/test-recreate-deployment b8f58ecf-3acb-4874-b3e3-15fbb6bb0186 4495384 2 2020-08-28 04:54:02 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003a6db68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-28 04:54:08 +0000 UTC,LastTransitionTime:2020-08-28 04:54:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-08-28 04:54:08 +0000 UTC,LastTransitionTime:2020-08-28 04:54:02 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Aug 28 04:54:08.561: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-9392 /apis/apps/v1/namespaces/deployment-9392/replicasets/test-recreate-deployment-5f94c574ff 1494d285-bd1c-419f-8d6b-8e06c5fea1c4 4495383 1 2020-08-28 04:54:07 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment b8f58ecf-3acb-4874-b3e3-15fbb6bb0186 0x4000dee917 0x4000dee918}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4000dee978  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 28 04:54:08.561: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 28 04:54:08.562: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-9392 /apis/apps/v1/namespaces/deployment-9392/replicasets/test-recreate-deployment-799c574856 ee547dac-36d3-4ea1-8ffa-814dbc88377d 4495373 2 2020-08-28 04:54:02 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment b8f58ecf-3acb-4874-b3e3-15fbb6bb0186 0x4000dee9e7 0x4000dee9e8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4000deea58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 28 04:54:08.570: INFO: Pod "test-recreate-deployment-5f94c574ff-vqmmw" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-vqmmw test-recreate-deployment-5f94c574ff- deployment-9392 /api/v1/namespaces/deployment-9392/pods/test-recreate-deployment-5f94c574ff-vqmmw 4708af08-12bd-48d0-b9b5-3712dcf7e6a9 4495385 0 2020-08-28 04:54:07 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 1494d285-bd1c-419f-8d6b-8e06c5fea1c4 0x4000deeee7 0x4000deeee8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lxx4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lxx4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lxx4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:54:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:54:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:54:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 04:54:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-28 04:54:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:54:08.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9392" for this suite.

• [SLOW TEST:5.764 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":209,"skipped":3552,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:54:08.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 04:54:08.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 28 04:54:27.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2882 create -f -'
Aug 28 04:54:38.873: INFO: stderr: ""
Aug 28 04:54:38.874: INFO: stdout: "e2e-test-crd-publish-openapi-193-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 28 04:54:38.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2882 delete e2e-test-crd-publish-openapi-193-crds test-foo'
Aug 28 04:54:40.876: INFO: stderr: ""
Aug 28 04:54:40.876: INFO: stdout: "e2e-test-crd-publish-openapi-193-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 28 04:54:40.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2882 apply -f -'
Aug 28 04:54:43.381: INFO: stderr: ""
Aug 28 04:54:43.381: INFO: stdout: "e2e-test-crd-publish-openapi-193-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 28 04:54:43.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2882 delete e2e-test-crd-publish-openapi-193-crds test-foo'
Aug 28 04:54:44.661: INFO: stderr: ""
Aug 28 04:54:44.661: INFO: stdout: "e2e-test-crd-publish-openapi-193-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 28 04:54:44.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2882 create -f -'
Aug 28 04:54:46.275: INFO: rc: 1
Aug 28 04:54:46.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2882 apply -f -'
Aug 28 04:54:47.870: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 28 04:54:47.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2882 create -f -'
Aug 28 04:54:50.112: INFO: rc: 1
Aug 28 04:54:50.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2882 apply -f -'
Aug 28 04:54:52.032: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 28 04:54:52.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-193-crds'
Aug 28 04:54:54.188: INFO: stderr: ""
Aug 28 04:54:54.188: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-193-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug 28 04:54:54.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-193-crds.metadata'
Aug 28 04:54:56.079: INFO: stderr: ""
Aug 28 04:54:56.079: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-193-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug 28 04:54:56.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-193-crds.spec'
Aug 28 04:54:58.430: INFO: stderr: ""
Aug 28 04:54:58.430: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-193-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug 28 04:54:58.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-193-crds.spec.bars'
Aug 28 04:55:00.736: INFO: stderr: ""
Aug 28 04:55:00.736: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-193-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug 28 04:55:00.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-193-crds.spec.bars2'
Aug 28 04:55:02.999: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:55:22.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2882" for this suite.

• [SLOW TEST:74.255 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":210,"skipped":3585,"failed":0}
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:55:22.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 28 04:55:27.014: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:55:27.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3661" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3585,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:55:27.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 28 04:55:27.275: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e498b30-09b9-4d8f-8a2f-89e89f6bb0c6" in namespace "projected-3175" to be "success or failure"
Aug 28 04:55:27.306: INFO: Pod "downwardapi-volume-8e498b30-09b9-4d8f-8a2f-89e89f6bb0c6": Phase="Pending", Reason="", readiness=false. Elapsed: 29.957273ms
Aug 28 04:55:29.312: INFO: Pod "downwardapi-volume-8e498b30-09b9-4d8f-8a2f-89e89f6bb0c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036902573s
Aug 28 04:55:31.323: INFO: Pod "downwardapi-volume-8e498b30-09b9-4d8f-8a2f-89e89f6bb0c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047437756s
STEP: Saw pod success
Aug 28 04:55:31.323: INFO: Pod "downwardapi-volume-8e498b30-09b9-4d8f-8a2f-89e89f6bb0c6" satisfied condition "success or failure"
Aug 28 04:55:31.329: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8e498b30-09b9-4d8f-8a2f-89e89f6bb0c6 container client-container: 
STEP: delete the pod
Aug 28 04:55:31.510: INFO: Waiting for pod downwardapi-volume-8e498b30-09b9-4d8f-8a2f-89e89f6bb0c6 to disappear
Aug 28 04:55:31.515: INFO: Pod downwardapi-volume-8e498b30-09b9-4d8f-8a2f-89e89f6bb0c6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:55:31.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3175" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3624,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:55:31.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 04:55:36.644: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 04:55:38.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187336, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187336, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187336, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187336, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 04:55:41.936: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 04:55:41.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8562-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:55:44.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1339" for this suite.
STEP: Destroying namespace "webhook-1339-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.934 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":213,"skipped":3646,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:55:44.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-9d6786b4-c79e-4ded-9111-5f1202b8476b
STEP: Creating a pod to test consume secrets
Aug 28 04:55:44.566: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-45752b41-14a4-4925-9d75-9fd71b9cfedf" in namespace "projected-4762" to be "success or failure"
Aug 28 04:55:44.577: INFO: Pod "pod-projected-secrets-45752b41-14a4-4925-9d75-9fd71b9cfedf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.091071ms
Aug 28 04:55:46.595: INFO: Pod "pod-projected-secrets-45752b41-14a4-4925-9d75-9fd71b9cfedf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027948579s
Aug 28 04:55:48.602: INFO: Pod "pod-projected-secrets-45752b41-14a4-4925-9d75-9fd71b9cfedf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035563018s
STEP: Saw pod success
Aug 28 04:55:48.603: INFO: Pod "pod-projected-secrets-45752b41-14a4-4925-9d75-9fd71b9cfedf" satisfied condition "success or failure"
Aug 28 04:55:48.608: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-45752b41-14a4-4925-9d75-9fd71b9cfedf container secret-volume-test: 
STEP: delete the pod
Aug 28 04:55:48.647: INFO: Waiting for pod pod-projected-secrets-45752b41-14a4-4925-9d75-9fd71b9cfedf to disappear
Aug 28 04:55:48.652: INFO: Pod pod-projected-secrets-45752b41-14a4-4925-9d75-9fd71b9cfedf no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:55:48.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4762" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3647,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:55:48.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 04:55:51.738: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 04:55:53.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187351, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187351, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187351, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187351, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 04:55:55.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187351, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187351, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187351, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734187351, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 04:55:58.861: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 04:55:58.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:55:59.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-201" for this suite.
STEP: Destroying namespace "webhook-201-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.467 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":215,"skipped":3654,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:56:00.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 04:56:00.238: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/: 
alternatives.log
containers/

[the same alternatives.log / containers/ listing came back for every subsequent proxy request; the remainder of this test's output, including its result line, was lost in extraction]
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 04:56:00.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Aug 28 04:56:00.982: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-28T04:56:00Z generation:1 name:name1 resourceVersion:4495989 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:af8b6deb-c7fc-4fae-ad4c-c7fc3bc7551e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug 28 04:56:11.009: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-28T04:56:10Z generation:1 name:name2 resourceVersion:4496037 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:a79ecce3-f2e7-4db7-86fa-233d45619361] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug 28 04:56:21.019: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-28T04:56:00Z generation:2 name:name1 resourceVersion:4496066 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:af8b6deb-c7fc-4fae-ad4c-c7fc3bc7551e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug 28 04:56:31.028: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-28T04:56:10Z generation:2 name:name2 resourceVersion:4496096 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:a79ecce3-f2e7-4db7-86fa-233d45619361] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug 28 04:56:41.038: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-28T04:56:00Z generation:2 name:name1 resourceVersion:4496126 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:af8b6deb-c7fc-4fae-ad4c-c7fc3bc7551e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug 28 04:56:51.580: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-28T04:56:10Z generation:2 name:name2 resourceVersion:4496156 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:a79ecce3-f2e7-4db7-86fa-233d45619361] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:57:02.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-7046" for this suite.

• [SLOW TEST:62.535 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":217,"skipped":3688,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:57:02.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:58:04.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4310" for this suite.

• [SLOW TEST:61.230 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3707,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:58:04.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 28 04:58:04.355: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-9186 /api/v1/namespaces/watch-9186/configmaps/e2e-watch-test-resource-version ae22c040-78fb-4d57-8a02-fb5eb382e9bd 4496393 0 2020-08-28 04:58:04 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 28 04:58:04.357: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-9186 /api/v1/namespaces/watch-9186/configmaps/e2e-watch-test-resource-version ae22c040-78fb-4d57-8a02-fb5eb382e9bd 4496394 0 2020-08-28 04:58:04 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:58:04.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9186" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":219,"skipped":3746,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:58:04.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:58:18.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-114" for this suite.

• [SLOW TEST:14.106 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":220,"skipped":3747,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:58:18.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 04:58:18.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 28 04:58:37.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2323 create -f -'
Aug 28 04:58:41.729: INFO: stderr: ""
Aug 28 04:58:41.729: INFO: stdout: "e2e-test-crd-publish-openapi-5488-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 28 04:58:41.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2323 delete e2e-test-crd-publish-openapi-5488-crds test-cr'
Aug 28 04:58:43.006: INFO: stderr: ""
Aug 28 04:58:43.006: INFO: stdout: "e2e-test-crd-publish-openapi-5488-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Aug 28 04:58:43.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2323 apply -f -'
Aug 28 04:58:44.600: INFO: stderr: ""
Aug 28 04:58:44.601: INFO: stdout: "e2e-test-crd-publish-openapi-5488-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 28 04:58:44.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2323 delete e2e-test-crd-publish-openapi-5488-crds test-cr'
Aug 28 04:58:45.894: INFO: stderr: ""
Aug 28 04:58:45.894: INFO: stdout: "e2e-test-crd-publish-openapi-5488-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Aug 28 04:58:45.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5488-crds'
Aug 28 04:58:47.449: INFO: stderr: ""
Aug 28 04:58:47.449: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5488-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:59:06.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2323" for this suite.

• [SLOW TEST:47.821 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":221,"skipped":3747,"failed":0}
SSSSS
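The client-side behavior shown above can be reproduced by hand; the kind and apiVersion are the ones kubectl explain printed for this CRD, while the field under spec is deliberately arbitrary, since no validation schema is published:

kubectl create -f - <<EOF
apiVersion: crd-publish-openapi-test-empty.example.com/v1
kind: E2e-test-crd-publish-openapi-5488-crd
metadata:
  name: test-cr
spec:
  anything: goes        # unknown properties pass client-side validation without a schema
EOF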
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:59:06.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:59:10.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3543" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3752,"failed":0}
SSSSS
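A minimal pod of the kind this spec schedules might look like the following; the pod name and write target are illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "echo hello > /file"]   # expected to fail
    securityContext:
      readOnlyRootFilesystem: true
EOF

With readOnlyRootFilesystem: true the container's root filesystem is mounted read-only, so the write fails with "Read-only file system".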
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:59:10.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8948.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8948.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8948.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8948.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8948.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8948.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8948.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8948.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8948.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8948.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8948.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 124.189.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.189.124_udp@PTR;check="$$(dig +tcp +noall +answer +search 124.189.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.189.124_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8948.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8948.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8948.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8948.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8948.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8948.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8948.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8948.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8948.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8948.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8948.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 124.189.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.189.124_udp@PTR;check="$$(dig +tcp +noall +answer +search 124.189.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.189.124_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 28 04:59:16.673: INFO: Unable to read wheezy_udp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:16.677: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:16.681: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:16.684: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:16.707: INFO: Unable to read jessie_udp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:16.710: INFO: Unable to read jessie_tcp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:16.713: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:16.716: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:16.763: INFO: Lookups using dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b failed for: [wheezy_udp@dns-test-service.dns-8948.svc.cluster.local wheezy_tcp@dns-test-service.dns-8948.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local jessie_udp@dns-test-service.dns-8948.svc.cluster.local jessie_tcp@dns-test-service.dns-8948.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local]

Aug 28 04:59:21.770: INFO: Unable to read wheezy_udp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:21.775: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:21.779: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:21.782: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:21.810: INFO: Unable to read jessie_udp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:21.814: INFO: Unable to read jessie_tcp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:21.819: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:21.823: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:21.864: INFO: Lookups using dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b failed for: [wheezy_udp@dns-test-service.dns-8948.svc.cluster.local wheezy_tcp@dns-test-service.dns-8948.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local jessie_udp@dns-test-service.dns-8948.svc.cluster.local jessie_tcp@dns-test-service.dns-8948.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local]

Aug 28 04:59:26.770: INFO: Unable to read wheezy_udp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:26.774: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:26.779: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:26.782: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:26.809: INFO: Unable to read jessie_udp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:26.812: INFO: Unable to read jessie_tcp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:26.815: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:26.819: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:26.841: INFO: Lookups using dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b failed for: [wheezy_udp@dns-test-service.dns-8948.svc.cluster.local wheezy_tcp@dns-test-service.dns-8948.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local jessie_udp@dns-test-service.dns-8948.svc.cluster.local jessie_tcp@dns-test-service.dns-8948.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local]

Aug 28 04:59:31.774: INFO: Unable to read wheezy_udp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:31.778: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:31.782: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:31.785: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:31.810: INFO: Unable to read jessie_udp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:31.814: INFO: Unable to read jessie_tcp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:31.817: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:31.822: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:31.856: INFO: Lookups using dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b failed for: [wheezy_udp@dns-test-service.dns-8948.svc.cluster.local wheezy_tcp@dns-test-service.dns-8948.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local jessie_udp@dns-test-service.dns-8948.svc.cluster.local jessie_tcp@dns-test-service.dns-8948.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local]

Aug 28 04:59:36.771: INFO: Unable to read wheezy_udp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:36.776: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:36.781: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:36.792: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:36.831: INFO: Unable to read jessie_udp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:36.834: INFO: Unable to read jessie_tcp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:36.838: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:36.842: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:36.874: INFO: Lookups using dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b failed for: [wheezy_udp@dns-test-service.dns-8948.svc.cluster.local wheezy_tcp@dns-test-service.dns-8948.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local jessie_udp@dns-test-service.dns-8948.svc.cluster.local jessie_tcp@dns-test-service.dns-8948.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local]

Aug 28 04:59:41.771: INFO: Unable to read wheezy_udp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:41.776: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:41.780: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:41.784: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:41.807: INFO: Unable to read jessie_udp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:41.811: INFO: Unable to read jessie_tcp@dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:41.815: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:41.819: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local from pod dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b: the server could not find the requested resource (get pods dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b)
Aug 28 04:59:41.843: INFO: Lookups using dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b failed for: [wheezy_udp@dns-test-service.dns-8948.svc.cluster.local wheezy_tcp@dns-test-service.dns-8948.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local jessie_udp@dns-test-service.dns-8948.svc.cluster.local jessie_tcp@dns-test-service.dns-8948.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8948.svc.cluster.local]

Aug 28 04:59:46.838: INFO: DNS probes using dns-8948/dns-test-a5dad0a3-45a7-4a52-8fb8-ed79b3e8301b succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 04:59:47.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8948" for this suite.

• [SLOW TEST:36.957 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":223,"skipped":3757,"failed":0}
SSSSSSSSSSSSSSSSSSS
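The names probed above follow the standard cluster DNS scheme and can be checked by hand from any pod in the cluster; the service name, namespace, and ClusterIP below are the ones from this run:

dig +short dns-test-service.dns-8948.svc.cluster.local A
dig +short _http._tcp.dns-test-service.dns-8948.svc.cluster.local SRV
dig +short -x 10.101.189.124        # PTR for the service ClusterIP

The earlier "Unable to read ..." lines are consistent with the prober pod's per-name result files not having been written yet; once kube-dns serves all records, the probes report success, as the 04:59:46 line shows.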
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 04:59:47.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 28 04:59:47.592: INFO: namespace kubectl-5456
Aug 28 04:59:47.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5456'
Aug 28 04:59:49.580: INFO: stderr: ""
Aug 28 04:59:49.580: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 28 04:59:50.589: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 04:59:50.589: INFO: Found 0 / 1
Aug 28 04:59:51.699: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 04:59:51.700: INFO: Found 0 / 1
Aug 28 04:59:52.589: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 04:59:52.590: INFO: Found 0 / 1
Aug 28 04:59:53.589: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 04:59:53.589: INFO: Found 1 / 1
Aug 28 04:59:53.589: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 28 04:59:53.595: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 04:59:53.595: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 28 04:59:53.595: INFO: wait on agnhost-master startup in kubectl-5456 
Aug 28 04:59:53.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-xd52l agnhost-master --namespace=kubectl-5456'
Aug 28 04:59:54.896: INFO: stderr: ""
Aug 28 04:59:54.896: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 28 04:59:54.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5456'
Aug 28 04:59:56.323: INFO: stderr: ""
Aug 28 04:59:56.323: INFO: stdout: "service/rm2 exposed\n"
Aug 28 04:59:56.381: INFO: Service rm2 in namespace kubectl-5456 found.
STEP: exposing service
Aug 28 04:59:58.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5456'
Aug 28 04:59:59.784: INFO: stderr: ""
Aug 28 04:59:59.784: INFO: stdout: "service/rm3 exposed\n"
Aug 28 04:59:59.806: INFO: Service rm3 in namespace kubectl-5456 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:00:01.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5456" for this suite.

• [SLOW TEST:14.405 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should create services for rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":224,"skipped":3776,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
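The expose flow above maps to two plain commands; the ports and names are the ones this spec used:

kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5456
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5456
kubectl get services rm2 rm3 --namespace=kubectl-5456 -o wide   # both select the same pods

Exposing a service rather than the RC copies the selector from rm2, so rm3 fronts the same endpoints on a different port.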
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:00:01.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 28 05:00:02.644: INFO: Pod name wrapped-volume-race-e830dbc8-ba38-4de8-934e-db43b1ce6577: Found 0 pods out of 5
Aug 28 05:00:07.935: INFO: Pod name wrapped-volume-race-e830dbc8-ba38-4de8-934e-db43b1ce6577: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e830dbc8-ba38-4de8-934e-db43b1ce6577 in namespace emptydir-wrapper-2594, will wait for the garbage collector to delete the pods
Aug 28 05:00:25.211: INFO: Deleting ReplicationController wrapped-volume-race-e830dbc8-ba38-4de8-934e-db43b1ce6577 took: 9.666293ms
Aug 28 05:00:26.312: INFO: Terminating ReplicationController wrapped-volume-race-e830dbc8-ba38-4de8-934e-db43b1ce6577 pods took: 1.101028421s
STEP: Creating RC which spawns configmap-volume pods
Aug 28 05:00:43.932: INFO: Pod name wrapped-volume-race-76fc37b7-fdae-46ae-af73-00845b095da4: Found 0 pods out of 5
Aug 28 05:00:49.267: INFO: Pod name wrapped-volume-race-76fc37b7-fdae-46ae-af73-00845b095da4: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-76fc37b7-fdae-46ae-af73-00845b095da4 in namespace emptydir-wrapper-2594, will wait for the garbage collector to delete the pods
Aug 28 05:01:11.589: INFO: Deleting ReplicationController wrapped-volume-race-76fc37b7-fdae-46ae-af73-00845b095da4 took: 332.073815ms
Aug 28 05:01:12.789: INFO: Terminating ReplicationController wrapped-volume-race-76fc37b7-fdae-46ae-af73-00845b095da4 pods took: 1.200732134s
STEP: Creating RC which spawns configmap-volume pods
Aug 28 05:01:32.268: INFO: Pod name wrapped-volume-race-dbf5519a-4ebc-446d-ae3d-12547fa10829: Found 0 pods out of 5
Aug 28 05:01:37.284: INFO: Pod name wrapped-volume-race-dbf5519a-4ebc-446d-ae3d-12547fa10829: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-dbf5519a-4ebc-446d-ae3d-12547fa10829 in namespace emptydir-wrapper-2594, will wait for the garbage collector to delete the pods
Aug 28 05:01:53.454: INFO: Deleting ReplicationController wrapped-volume-race-dbf5519a-4ebc-446d-ae3d-12547fa10829 took: 8.406664ms
Aug 28 05:01:53.855: INFO: Terminating ReplicationController wrapped-volume-race-dbf5519a-4ebc-446d-ae3d-12547fa10829 pods took: 400.719633ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:02:23.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2594" for this suite.

• [SLOW TEST:142.586 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":225,"skipped":3802,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
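Each racing pod mounts many configmap volumes at once; a two-volume sketch of the pattern (names illustrative; the spec above uses 50 configmaps and 5 replicas per RC):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-example     # illustrative name
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cm-0
      mountPath: /etc/cm-0
    - name: cm-1
      mountPath: /etc/cm-1
  volumes:
  - name: cm-0
    configMap:
      name: racey-configmap-0      # illustrative configmap name
  - name: cm-1
    configMap:
      name: racey-configmap-1
EOF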
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:02:24.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-de00dcb5-2213-4e62-ad2a-a0b35121315a in namespace container-probe-6634
Aug 28 05:02:30.850: INFO: Started pod busybox-de00dcb5-2213-4e62-ad2a-a0b35121315a in namespace container-probe-6634
STEP: checking the pod's current state and verifying that restartCount is present
Aug 28 05:02:31.120: INFO: Initial restart count of pod busybox-de00dcb5-2213-4e62-ad2a-a0b35121315a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:06:31.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6634" for this suite.

• [SLOW TEST:247.380 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3824,"failed":0}
S
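The probe under test is the classic exec liveness check; a self-contained sketch (pod name illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness           # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF

Because /tmp/health exists for the container's whole life, every probe succeeds and restartCount stays 0, which is what the spec verifies over its roughly four-minute observation window (05:02:31 to 05:06:31 above).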
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:06:31.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Aug 28 05:06:31.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 28 05:06:33.322: INFO: stderr: ""
Aug 28 05:06:33.322: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:06:33.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4035" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":227,"skipped":3825,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
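The \x1b[0;32m sequences in the stdout above are ANSI color escapes from kubectl's terminal output; the check itself boils down to:

kubectl cluster-info
kubectl cluster-info dump          # full cluster state, useful when the first command looks wrong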
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:06:33.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 05:06:34.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 28 05:06:36.146: INFO: stderr: ""
Aug 28 05:06:36.147: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.11\", GitCommit:\"ea5f00d93211b7c80247bf607cfa422ad6fb5347\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T15:20:25Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:06:36.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5570" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":228,"skipped":3849,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
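The same version.Info structs can be fetched in machine-readable form; the flags below are as understood by clients of roughly this vintage, so treat them as an assumption:

kubectl version --short            # one-line client/server versions
kubectl version -o json            # the structs shown above, as JSON

Note the client here is linux/arm64 while the server is linux/amd64; platform and version skew are independent of each other.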
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:06:36.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-8542
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8542 to expose endpoints map[]
Aug 28 05:06:38.390: INFO: successfully validated that service endpoint-test2 in namespace services-8542 exposes endpoints map[] (208.376711ms elapsed)
STEP: Creating pod pod1 in namespace services-8542
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8542 to expose endpoints map[pod1:[80]]
Aug 28 05:06:43.384: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.954501457s elapsed, will retry)
Aug 28 05:06:44.394: INFO: successfully validated that service endpoint-test2 in namespace services-8542 exposes endpoints map[pod1:[80]] (5.965028809s elapsed)
STEP: Creating pod pod2 in namespace services-8542
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8542 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 28 05:06:48.737: INFO: successfully validated that service endpoint-test2 in namespace services-8542 exposes endpoints map[pod1:[80] pod2:[80]] (4.336146025s elapsed)
STEP: Deleting pod pod1 in namespace services-8542
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8542 to expose endpoints map[pod2:[80]]
Aug 28 05:06:48.824: INFO: successfully validated that service endpoint-test2 in namespace services-8542 exposes endpoints map[pod2:[80]] (80.429778ms elapsed)
STEP: Deleting pod pod2 in namespace services-8542
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8542 to expose endpoints map[]
Aug 28 05:06:48.843: INFO: successfully validated that service endpoint-test2 in namespace services-8542 exposes endpoints map[] (7.773671ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:06:48.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8542" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:12.385 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":229,"skipped":3881,"failed":0}
SSSSS
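Endpoint convergence like the above can be watched directly; the names are from this run:

kubectl get endpoints endpoint-test2 --namespace=services-8542 --watch
kubectl describe service endpoint-test2 --namespace=services-8542

The endpoints object gains and loses pod IPs as pod1 and pod2 are created and deleted, which is exactly the map[pod1:[80] pod2:[80]] progression the spec asserts on.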
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:06:48.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 05:06:49.686: INFO: Creating ReplicaSet my-hostname-basic-f6a6daf6-9e7e-4327-9606-63e0c1cf3ca2
Aug 28 05:06:49.934: INFO: Pod name my-hostname-basic-f6a6daf6-9e7e-4327-9606-63e0c1cf3ca2: Found 0 pods out of 1
Aug 28 05:06:54.940: INFO: Pod name my-hostname-basic-f6a6daf6-9e7e-4327-9606-63e0c1cf3ca2: Found 1 pods out of 1
Aug 28 05:06:54.941: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f6a6daf6-9e7e-4327-9606-63e0c1cf3ca2" is running
Aug 28 05:06:54.945: INFO: Pod "my-hostname-basic-f6a6daf6-9e7e-4327-9606-63e0c1cf3ca2-v85l5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 05:06:50 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 05:06:53 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 05:06:53 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 05:06:49 +0000 UTC Reason: Message:}])
Aug 28 05:06:54.945: INFO: Trying to dial the pod
Aug 28 05:06:59.965: INFO: Controller my-hostname-basic-f6a6daf6-9e7e-4327-9606-63e0c1cf3ca2: Got expected result from replica 1 [my-hostname-basic-f6a6daf6-9e7e-4327-9606-63e0c1cf3ca2-v85l5]: "my-hostname-basic-f6a6daf6-9e7e-4327-9606-63e0c1cf3ca2-v85l5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:06:59.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3636" for this suite.

• [SLOW TEST:11.058 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":230,"skipped":3886,"failed":0}
SSSSSSSS
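A ReplicaSet of the shape this spec creates; the image and port are assumptions (the run never prints them), everything else is the standard apps/v1 shape:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic          # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376                                  # assumed port
EOF

Dialing each replica and getting its own pod name back, as in the "Got expected result from replica 1" line above, confirms every replica is serving.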
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:06:59.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 05:07:00.202: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-2bfdd76f-f77f-4eeb-8f02-3e03a7eb805b" in namespace "security-context-test-3314" to be "success or failure"
Aug 28 05:07:00.246: INFO: Pod "alpine-nnp-false-2bfdd76f-f77f-4eeb-8f02-3e03a7eb805b": Phase="Pending", Reason="", readiness=false. Elapsed: 44.101142ms
Aug 28 05:07:02.283: INFO: Pod "alpine-nnp-false-2bfdd76f-f77f-4eeb-8f02-3e03a7eb805b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081024989s
Aug 28 05:07:04.289: INFO: Pod "alpine-nnp-false-2bfdd76f-f77f-4eeb-8f02-3e03a7eb805b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08703656s
Aug 28 05:07:06.349: INFO: Pod "alpine-nnp-false-2bfdd76f-f77f-4eeb-8f02-3e03a7eb805b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.146364614s
Aug 28 05:07:06.349: INFO: Pod "alpine-nnp-false-2bfdd76f-f77f-4eeb-8f02-3e03a7eb805b" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:07:06.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3314" for this suite.

• [SLOW TEST:6.526 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3894,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
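The field under test can be checked from inside the container; whether NoNewPrivs appears in /proc/self/status is kernel-dependent, so treat the check itself as an assumption:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: alpine
    image: alpine
    command: ["/bin/sh", "-c", "grep NoNewPrivs /proc/self/status"]  # expect "NoNewPrivs: 1"
    securityContext:
      allowPrivilegeEscalation: false
EOF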
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:07:06.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:07:13.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-443" for this suite.

• [SLOW TEST:7.262 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":232,"skipped":3918,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
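The prompt-calculation check reduces to creating a quota and reading its status back; the quota name is illustrative, the namespace is the one from this run:

kubectl create quota test-quota --hard=pods=2,services=1 --namespace=resourcequota-443
kubectl get resourcequota test-quota --namespace=resourcequota-443 -o yaml

spec.hard is copied into status.hard and status.used is filled in by the quota controller shortly after creation, which is the "promptly calculated" condition the spec waits for.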
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:07:13.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-329
STEP: Creating active service to test reachability when its FQDN is referred to as the externalName of another service
STEP: creating service externalsvc in namespace services-329
STEP: creating replication controller externalsvc in namespace services-329
I0828 05:07:14.136006       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-329, replica count: 2
I0828 05:07:17.189110       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 05:07:20.189902       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Aug 28 05:07:20.311: INFO: Creating new exec pod
Aug 28 05:07:24.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-329 execpod88744 -- /bin/sh -x -c nslookup nodeport-service'
Aug 28 05:07:25.885: INFO: stderr: "I0828 05:07:25.739140    4142 log.go:172] (0x40009e4bb0) (0x40006e61e0) Create stream\nI0828 05:07:25.747345    4142 log.go:172] (0x40009e4bb0) (0x40006e61e0) Stream added, broadcasting: 1\nI0828 05:07:25.758727    4142 log.go:172] (0x40009e4bb0) Reply frame received for 1\nI0828 05:07:25.759373    4142 log.go:172] (0x40009e4bb0) (0x40007c8000) Create stream\nI0828 05:07:25.759440    4142 log.go:172] (0x40009e4bb0) (0x40007c8000) Stream added, broadcasting: 3\nI0828 05:07:25.762169    4142 log.go:172] (0x40009e4bb0) Reply frame received for 3\nI0828 05:07:25.762397    4142 log.go:172] (0x40009e4bb0) (0x40006e6280) Create stream\nI0828 05:07:25.762450    4142 log.go:172] (0x40009e4bb0) (0x40006e6280) Stream added, broadcasting: 5\nI0828 05:07:25.763833    4142 log.go:172] (0x40009e4bb0) Reply frame received for 5\nI0828 05:07:25.848060    4142 log.go:172] (0x40009e4bb0) Data frame received for 5\nI0828 05:07:25.848273    4142 log.go:172] (0x40006e6280) (5) Data frame handling\nI0828 05:07:25.848660    4142 log.go:172] (0x40006e6280) (5) Data frame sent\n+ nslookup nodeport-service\nI0828 05:07:25.853229    4142 log.go:172] (0x40009e4bb0) Data frame received for 3\nI0828 05:07:25.853348    4142 log.go:172] (0x40007c8000) (3) Data frame handling\nI0828 05:07:25.853441    4142 log.go:172] (0x40007c8000) (3) Data frame sent\nI0828 05:07:25.853857    4142 log.go:172] (0x40009e4bb0) Data frame received for 3\nI0828 05:07:25.853955    4142 log.go:172] (0x40007c8000) (3) Data frame handling\nI0828 05:07:25.854071    4142 log.go:172] (0x40007c8000) (3) Data frame sent\nI0828 05:07:25.854167    4142 log.go:172] (0x40009e4bb0) Data frame received for 5\nI0828 05:07:25.854250    4142 log.go:172] (0x40006e6280) (5) Data frame handling\nI0828 05:07:25.854439    4142 log.go:172] (0x40009e4bb0) Data frame received for 3\nI0828 05:07:25.854544    4142 log.go:172] (0x40007c8000) (3) Data frame handling\nI0828 05:07:25.856219    4142 log.go:172] (0x40009e4bb0) Data frame received for 1\nI0828 05:07:25.856290    4142 log.go:172] (0x40006e61e0) (1) Data frame handling\nI0828 05:07:25.856368    4142 log.go:172] (0x40006e61e0) (1) Data frame sent\nI0828 05:07:25.857261    4142 log.go:172] (0x40009e4bb0) (0x40006e61e0) Stream removed, broadcasting: 1\nI0828 05:07:25.862172    4142 log.go:172] (0x40009e4bb0) Go away received\nI0828 05:07:25.871234    4142 log.go:172] (0x40009e4bb0) (0x40006e61e0) Stream removed, broadcasting: 1\nI0828 05:07:25.871916    4142 log.go:172] (0x40009e4bb0) (0x40007c8000) Stream removed, broadcasting: 3\nI0828 05:07:25.872461    4142 log.go:172] (0x40009e4bb0) (0x40006e6280) Stream removed, broadcasting: 5\n"
Aug 28 05:07:25.885: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-329.svc.cluster.local\tcanonical name = externalsvc.services-329.svc.cluster.local.\nName:\texternalsvc.services-329.svc.cluster.local\nAddress: 10.102.250.167\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-329, will wait for the garbage collector to delete the pods
Aug 28 05:07:25.957: INFO: Deleting ReplicationController externalsvc took: 7.028528ms
Aug 28 05:07:26.258: INFO: Terminating ReplicationController externalsvc pods took: 300.760679ms
Aug 28 05:07:41.942: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:07:42.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-329" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:28.320 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":233,"skipped":3943,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:07:42.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on the node's default medium
Aug 28 05:07:42.266: INFO: Waiting up to 5m0s for pod "pod-c197af22-e6d2-4621-90c4-17a37313d1af" in namespace "emptydir-3764" to be "success or failure"
Aug 28 05:07:42.271: INFO: Pod "pod-c197af22-e6d2-4621-90c4-17a37313d1af": Phase="Pending", Reason="", readiness=false. Elapsed: 5.18188ms
Aug 28 05:07:44.391: INFO: Pod "pod-c197af22-e6d2-4621-90c4-17a37313d1af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125316956s
Aug 28 05:07:46.398: INFO: Pod "pod-c197af22-e6d2-4621-90c4-17a37313d1af": Phase="Running", Reason="", readiness=true. Elapsed: 4.132598547s
Aug 28 05:07:48.404: INFO: Pod "pod-c197af22-e6d2-4621-90c4-17a37313d1af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.137931694s
STEP: Saw pod success
Aug 28 05:07:48.404: INFO: Pod "pod-c197af22-e6d2-4621-90c4-17a37313d1af" satisfied condition "success or failure"
Aug 28 05:07:48.408: INFO: Trying to get logs from node jerma-worker2 pod pod-c197af22-e6d2-4621-90c4-17a37313d1af container test-container: 
STEP: delete the pod
Aug 28 05:07:48.486: INFO: Waiting for pod pod-c197af22-e6d2-4621-90c4-17a37313d1af to disappear
Aug 28 05:07:48.502: INFO: Pod pod-c197af22-e6d2-4621-90c4-17a37313d1af no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:07:48.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3764" for this suite.

• [SLOW TEST:6.419 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3955,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:07:48.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 28 05:07:53.266: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f2a98f84-8ba3-49b9-80c7-62b90e87e210"
Aug 28 05:07:53.266: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f2a98f84-8ba3-49b9-80c7-62b90e87e210" in namespace "pods-5616" to be "terminated due to deadline exceeded"
Aug 28 05:07:53.529: INFO: Pod "pod-update-activedeadlineseconds-f2a98f84-8ba3-49b9-80c7-62b90e87e210": Phase="Running", Reason="", readiness=true. Elapsed: 262.740632ms
Aug 28 05:07:55.770: INFO: Pod "pod-update-activedeadlineseconds-f2a98f84-8ba3-49b9-80c7-62b90e87e210": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.503635143s
Aug 28 05:07:55.771: INFO: Pod "pod-update-activedeadlineseconds-f2a98f84-8ba3-49b9-80c7-62b90e87e210" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:07:55.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5616" for this suite.

• [SLOW TEST:7.412 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3958,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:07:55.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 28 05:07:56.393: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d07acfdb-e7d1-44d0-a90f-c97d35027872" in namespace "downward-api-45" to be "success or failure"
Aug 28 05:07:56.408: INFO: Pod "downwardapi-volume-d07acfdb-e7d1-44d0-a90f-c97d35027872": Phase="Pending", Reason="", readiness=false. Elapsed: 14.965092ms
Aug 28 05:07:58.511: INFO: Pod "downwardapi-volume-d07acfdb-e7d1-44d0-a90f-c97d35027872": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117931276s
Aug 28 05:08:00.519: INFO: Pod "downwardapi-volume-d07acfdb-e7d1-44d0-a90f-c97d35027872": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.125164723s
STEP: Saw pod success
Aug 28 05:08:00.519: INFO: Pod "downwardapi-volume-d07acfdb-e7d1-44d0-a90f-c97d35027872" satisfied condition "success or failure"
Aug 28 05:08:00.523: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d07acfdb-e7d1-44d0-a90f-c97d35027872 container client-container: 
STEP: delete the pod
Aug 28 05:08:00.587: INFO: Waiting for pod downwardapi-volume-d07acfdb-e7d1-44d0-a90f-c97d35027872 to disappear
Aug 28 05:08:00.660: INFO: Pod downwardapi-volume-d07acfdb-e7d1-44d0-a90f-c97d35027872 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:08:00.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-45" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3960,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:08:00.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:08:18.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3429" for this suite.

• [SLOW TEST:18.082 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":237,"skipped":3972,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:08:18.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-2942
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 28 05:08:18.842: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 28 05:08:45.045: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.213:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2942 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 05:08:45.045: INFO: >>> kubeConfig: /root/.kube/config
I0828 05:08:45.113435       8 log.go:172] (0x4002cae4d0) (0x4000340b40) Create stream
I0828 05:08:45.113668       8 log.go:172] (0x4002cae4d0) (0x4000340b40) Stream added, broadcasting: 1
I0828 05:08:45.118132       8 log.go:172] (0x4002cae4d0) Reply frame received for 1
I0828 05:08:45.118389       8 log.go:172] (0x4002cae4d0) (0x4000341680) Create stream
I0828 05:08:45.118529       8 log.go:172] (0x4002cae4d0) (0x4000341680) Stream added, broadcasting: 3
I0828 05:08:45.120409       8 log.go:172] (0x4002cae4d0) Reply frame received for 3
I0828 05:08:45.120587       8 log.go:172] (0x4002cae4d0) (0x400161a000) Create stream
I0828 05:08:45.120823       8 log.go:172] (0x4002cae4d0) (0x400161a000) Stream added, broadcasting: 5
I0828 05:08:45.122517       8 log.go:172] (0x4002cae4d0) Reply frame received for 5
I0828 05:08:45.225593       8 log.go:172] (0x4002cae4d0) Data frame received for 3
I0828 05:08:45.225774       8 log.go:172] (0x4000341680) (3) Data frame handling
I0828 05:08:45.225924       8 log.go:172] (0x4002cae4d0) Data frame received for 5
I0828 05:08:45.226101       8 log.go:172] (0x400161a000) (5) Data frame handling
I0828 05:08:45.226209       8 log.go:172] (0x4000341680) (3) Data frame sent
I0828 05:08:45.226338       8 log.go:172] (0x4002cae4d0) Data frame received for 3
I0828 05:08:45.226410       8 log.go:172] (0x4000341680) (3) Data frame handling
I0828 05:08:45.226712       8 log.go:172] (0x4002cae4d0) Data frame received for 1
I0828 05:08:45.226788       8 log.go:172] (0x4000340b40) (1) Data frame handling
I0828 05:08:45.226870       8 log.go:172] (0x4000340b40) (1) Data frame sent
I0828 05:08:45.226964       8 log.go:172] (0x4002cae4d0) (0x4000340b40) Stream removed, broadcasting: 1
I0828 05:08:45.227072       8 log.go:172] (0x4002cae4d0) Go away received
I0828 05:08:45.227457       8 log.go:172] (0x4002cae4d0) (0x4000340b40) Stream removed, broadcasting: 1
I0828 05:08:45.227569       8 log.go:172] (0x4002cae4d0) (0x4000341680) Stream removed, broadcasting: 3
I0828 05:08:45.227663       8 log.go:172] (0x4002cae4d0) (0x400161a000) Stream removed, broadcasting: 5
Aug 28 05:08:45.227: INFO: Found all expected endpoints: [netserver-0]
Aug 28 05:08:45.233: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2942 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 05:08:45.233: INFO: >>> kubeConfig: /root/.kube/config
I0828 05:08:45.301533       8 log.go:172] (0x4002caea50) (0x40010588c0) Create stream
I0828 05:08:45.301725       8 log.go:172] (0x4002caea50) (0x40010588c0) Stream added, broadcasting: 1
I0828 05:08:45.306473       8 log.go:172] (0x4002caea50) Reply frame received for 1
I0828 05:08:45.306719       8 log.go:172] (0x4002caea50) (0x4001058d20) Create stream
I0828 05:08:45.306812       8 log.go:172] (0x4002caea50) (0x4001058d20) Stream added, broadcasting: 3
I0828 05:08:45.309375       8 log.go:172] (0x4002caea50) Reply frame received for 3
I0828 05:08:45.309596       8 log.go:172] (0x4002caea50) (0x4001059860) Create stream
I0828 05:08:45.309711       8 log.go:172] (0x4002caea50) (0x4001059860) Stream added, broadcasting: 5
I0828 05:08:45.311451       8 log.go:172] (0x4002caea50) Reply frame received for 5
I0828 05:08:45.387147       8 log.go:172] (0x4002caea50) Data frame received for 5
I0828 05:08:45.387319       8 log.go:172] (0x4001059860) (5) Data frame handling
I0828 05:08:45.387494       8 log.go:172] (0x4002caea50) Data frame received for 3
I0828 05:08:45.387655       8 log.go:172] (0x4001058d20) (3) Data frame handling
I0828 05:08:45.387811       8 log.go:172] (0x4001058d20) (3) Data frame sent
I0828 05:08:45.387899       8 log.go:172] (0x4002caea50) Data frame received for 3
I0828 05:08:45.387999       8 log.go:172] (0x4001058d20) (3) Data frame handling
I0828 05:08:45.388548       8 log.go:172] (0x4002caea50) Data frame received for 1
I0828 05:08:45.388637       8 log.go:172] (0x40010588c0) (1) Data frame handling
I0828 05:08:45.388816       8 log.go:172] (0x40010588c0) (1) Data frame sent
I0828 05:08:45.388952       8 log.go:172] (0x4002caea50) (0x40010588c0) Stream removed, broadcasting: 1
I0828 05:08:45.389078       8 log.go:172] (0x4002caea50) Go away received
I0828 05:08:45.389628       8 log.go:172] (0x4002caea50) (0x40010588c0) Stream removed, broadcasting: 1
I0828 05:08:45.389788       8 log.go:172] (0x4002caea50) (0x4001058d20) Stream removed, broadcasting: 3
I0828 05:08:45.389906       8 log.go:172] (0x4002caea50) (0x4001059860) Stream removed, broadcasting: 5
Aug 28 05:08:45.390: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:08:45.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2942" for this suite.

• [SLOW TEST:26.643 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3995,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:08:45.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 28 05:08:45.574: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 28 05:08:45.598: INFO: Waiting for terminating namespaces to be deleted...
Aug 28 05:08:45.603: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 28 05:08:45.621: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 28 05:08:45.621: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 28 05:08:45.621: INFO: netserver-0 from pod-network-test-2942 started at 2020-08-28 05:08:18 +0000 UTC (1 container statuses recorded)
Aug 28 05:08:45.621: INFO: 	Container webserver ready: true, restart count 0
Aug 28 05:08:45.621: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 28 05:08:45.621: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 28 05:08:45.621: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 28 05:08:45.621: INFO: 	Container app ready: true, restart count 0
Aug 28 05:08:45.621: INFO: host-test-container-pod from pod-network-test-2942 started at 2020-08-28 05:08:39 +0000 UTC (1 container statuses recorded)
Aug 28 05:08:45.621: INFO: 	Container agnhost ready: true, restart count 0
Aug 28 05:08:45.621: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 28 05:08:45.658: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container statuses recorded)
Aug 28 05:08:45.658: INFO: 	Container httpd ready: true, restart count 0
Aug 28 05:08:45.658: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 28 05:08:45.658: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 28 05:08:45.658: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 28 05:08:45.658: INFO: 	Container app ready: true, restart count 0
Aug 28 05:08:45.658: INFO: netserver-1 from pod-network-test-2942 started at 2020-08-28 05:08:18 +0000 UTC (1 container statuses recorded)
Aug 28 05:08:45.659: INFO: 	Container webserver ready: true, restart count 0
Aug 28 05:08:45.659: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 28 05:08:45.659: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 28 05:08:45.659: INFO: test-container-pod from pod-network-test-2942 started at 2020-08-28 05:08:38 +0000 UTC (1 container statuses recorded)
Aug 28 05:08:45.659: INFO: 	Container webserver ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to find a node that can run it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-b201dcb8-5648-4b31-9e21-912bea43fe1d 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-b201dcb8-5648-4b31-9e21-912bea43fe1d off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-b201dcb8-5648-4b31-9e21-912bea43fe1d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:09:04.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3450" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:18.653 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":239,"skipped":4054,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:09:04.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 05:09:06.183: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 05:09:08.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188146, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188146, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188146, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188146, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 05:09:10.221: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188146, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188146, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188146, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188146, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 05:09:13.285: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Aug 28 05:09:13.321: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:09:13.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7711" for this suite.
STEP: Destroying namespace "webhook-7711-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.369 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":240,"skipped":4082,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:09:13.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:09:17.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-561" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":241,"skipped":4086,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:09:17.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:09:21.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7639" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":242,"skipped":4124,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:09:21.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 28 05:09:25.313: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 28 05:09:27.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188165, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188165, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188165, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188165, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 05:09:30.404: INFO: Waiting for the number of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 05:09:30.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:09:31.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4764" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:10.014 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":243,"skipped":4150,"failed":0}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:09:31.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 28 05:09:31.892: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:09:40.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9845" for this suite.

• [SLOW TEST:8.518 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":244,"skipped":4152,"failed":0}
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:09:40.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1493
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node on which to schedule the stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1493
STEP: Creating statefulset with conflicting port in namespace statefulset-1493
STEP: Waiting until pod test-pod starts running in namespace statefulset-1493
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-1493
Aug 28 05:09:44.503: INFO: Observed stateful pod in namespace: statefulset-1493, name: ss-0, uid: 6e73f95d-2423-4c7e-931f-00be6c3c9573, status phase: Pending. Waiting for statefulset controller to delete.
Aug 28 05:09:44.686: INFO: Observed stateful pod in namespace: statefulset-1493, name: ss-0, uid: 6e73f95d-2423-4c7e-931f-00be6c3c9573, status phase: Failed. Waiting for statefulset controller to delete.
Aug 28 05:09:44.732: INFO: Observed stateful pod in namespace: statefulset-1493, name: ss-0, uid: 6e73f95d-2423-4c7e-931f-00be6c3c9573, status phase: Failed. Waiting for statefulset controller to delete.
Aug 28 05:09:44.763: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1493
STEP: Removing pod with conflicting port in namespace statefulset-1493
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1493 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 28 05:09:51.422: INFO: Deleting all statefulset in ns statefulset-1493
Aug 28 05:09:51.458: INFO: Scaling statefulset ss to 0
Aug 28 05:10:01.751: INFO: Waiting for statefulset status.replicas updated to 0
Aug 28 05:10:01.754: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:10:01.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1493" for this suite.

• [SLOW TEST:21.441 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":245,"skipped":4152,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:10:01.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-5a429b6f-c665-453d-9899-a6ba5048c8ee
STEP: Creating configMap with name cm-test-opt-upd-75c39360-2a71-4c02-8689-4832f11fcec8
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5a429b6f-c665-453d-9899-a6ba5048c8ee
STEP: Updating configmap cm-test-opt-upd-75c39360-2a71-4c02-8689-4832f11fcec8
STEP: Creating configMap with name cm-test-opt-create-2c30386f-e2f1-4cb1-90d9-537f9907c587
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:10:10.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1252" for this suite.

• [SLOW TEST:8.397 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4156,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:10:10.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 05:10:12.122: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 05:10:14.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188212, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188212, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188212, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188212, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 05:10:16.321: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188212, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188212, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188212, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188212, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 05:10:19.375: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:10:19.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4820" for this suite.
STEP: Destroying namespace "webhook-4820-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.496 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":247,"skipped":4170,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
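The check above relies on the API server's built-in exemption: admission webhooks are never invoked for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves, so a dummy configuration stays deletable even when webhooks claim to intercept such objects. A minimal sketch of the create/delete round-trip (object and service names are illustrative, not taken from this run):

kubectl apply -f - <<EOF
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: dummy-validating-cfg          # hypothetical name
webhooks:
- name: dummy.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: default
      name: no-such-service           # never invoked; the object only needs to exist
      path: /validate
  failurePolicy: Ignore
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF
kubectl delete validatingwebhookconfiguration dummy-validating-cfg   # must succeed, as the STEPs above assert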
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:10:19.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-133d7c24-edf9-47af-9cee-6112506ca883
STEP: Creating a pod to test consume secrets
Aug 28 05:10:19.888: INFO: Waiting up to 5m0s for pod "pod-secrets-dd800774-5d5d-43a9-afb0-35f5086a93d1" in namespace "secrets-5250" to be "success or failure"
Aug 28 05:10:20.279: INFO: Pod "pod-secrets-dd800774-5d5d-43a9-afb0-35f5086a93d1": Phase="Pending", Reason="", readiness=false. Elapsed: 390.710941ms
Aug 28 05:10:22.285: INFO: Pod "pod-secrets-dd800774-5d5d-43a9-afb0-35f5086a93d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.397378658s
Aug 28 05:10:24.293: INFO: Pod "pod-secrets-dd800774-5d5d-43a9-afb0-35f5086a93d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.405193305s
STEP: Saw pod success
Aug 28 05:10:24.293: INFO: Pod "pod-secrets-dd800774-5d5d-43a9-afb0-35f5086a93d1" satisfied condition "success or failure"
Aug 28 05:10:24.298: INFO: Trying to get logs from node jerma-worker pod pod-secrets-dd800774-5d5d-43a9-afb0-35f5086a93d1 container secret-volume-test: 
STEP: delete the pod
Aug 28 05:10:24.323: INFO: Waiting for pod pod-secrets-dd800774-5d5d-43a9-afb0-35f5086a93d1 to disappear
Aug 28 05:10:24.357: INFO: Pod pod-secrets-dd800774-5d5d-43a9-afb0-35f5086a93d1 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:10:24.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5250" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4194,"failed":0}
SSSSSSSSSSSSSSS
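Reproducing this consumption path by hand is straightforward; a minimal sketch with illustrative names (busybox stands in for the image the suite uses):

kubectl create secret generic test-secret --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
EOF
kubectl logs pod-secrets-demo   # prints "value-1" once the pod reaches Succeeded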
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:10:24.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 05:10:26.677: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 05:10:28.693: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188226, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188226, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188226, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734188226, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 05:10:31.731: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:10:32.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4475" for this suite.
STEP: Destroying namespace "webhook-4475-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.019 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":249,"skipped":4209,"failed":0}
SSSSSS
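The list/delete-collection pair above maps onto two kubectl calls; a sketch, assuming the configurations share a common label (the suite selects on a label of its own):

kubectl get mutatingwebhookconfigurations -l test=list-demo      # list the created webhooks
kubectl delete mutatingwebhookconfigurations -l test=list-demo   # delete them as a collection
# afterwards, a freshly created ConfigMap is no longer mutated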
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:10:32.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Aug 28 05:10:32.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1948'
Aug 28 05:10:37.070: INFO: stderr: ""
Aug 28 05:10:37.070: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 28 05:10:37.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1948'
Aug 28 05:10:38.424: INFO: stderr: ""
Aug 28 05:10:38.425: INFO: stdout: "update-demo-nautilus-rl25m update-demo-nautilus-rvvr5 "
Aug 28 05:10:38.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rl25m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1948'
Aug 28 05:10:39.728: INFO: stderr: ""
Aug 28 05:10:39.728: INFO: stdout: ""
Aug 28 05:10:39.728: INFO: update-demo-nautilus-rl25m is created but not running
Aug 28 05:10:44.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1948'
Aug 28 05:10:46.025: INFO: stderr: ""
Aug 28 05:10:46.026: INFO: stdout: "update-demo-nautilus-rl25m update-demo-nautilus-rvvr5 "
Aug 28 05:10:46.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rl25m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1948'
Aug 28 05:10:47.282: INFO: stderr: ""
Aug 28 05:10:47.282: INFO: stdout: "true"
Aug 28 05:10:47.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rl25m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1948'
Aug 28 05:10:48.569: INFO: stderr: ""
Aug 28 05:10:48.569: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 28 05:10:48.569: INFO: validating pod update-demo-nautilus-rl25m
Aug 28 05:10:48.610: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 28 05:10:48.610: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 28 05:10:48.610: INFO: update-demo-nautilus-rl25m is verified up and running
Aug 28 05:10:48.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvvr5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1948'
Aug 28 05:10:49.921: INFO: stderr: ""
Aug 28 05:10:49.921: INFO: stdout: "true"
Aug 28 05:10:49.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvvr5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1948'
Aug 28 05:10:51.222: INFO: stderr: ""
Aug 28 05:10:51.222: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 28 05:10:51.222: INFO: validating pod update-demo-nautilus-rvvr5
Aug 28 05:10:51.227: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 28 05:10:51.227: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 28 05:10:51.227: INFO: update-demo-nautilus-rvvr5 is verified up and running
STEP: scaling down the replication controller
Aug 28 05:10:51.235: INFO: scanned /root for discovery docs: 
Aug 28 05:10:51.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1948'
Aug 28 05:10:53.576: INFO: stderr: ""
Aug 28 05:10:53.576: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 28 05:10:53.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1948'
Aug 28 05:10:54.868: INFO: stderr: ""
Aug 28 05:10:54.868: INFO: stdout: "update-demo-nautilus-rl25m update-demo-nautilus-rvvr5 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 28 05:10:59.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1948'
Aug 28 05:11:01.175: INFO: stderr: ""
Aug 28 05:11:01.175: INFO: stdout: "update-demo-nautilus-rl25m update-demo-nautilus-rvvr5 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 28 05:11:06.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1948'
Aug 28 05:11:07.448: INFO: stderr: ""
Aug 28 05:11:07.448: INFO: stdout: "update-demo-nautilus-rl25m "
Aug 28 05:11:07.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rl25m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1948'
Aug 28 05:11:08.684: INFO: stderr: ""
Aug 28 05:11:08.684: INFO: stdout: "true"
Aug 28 05:11:08.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rl25m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1948'
Aug 28 05:11:09.969: INFO: stderr: ""
Aug 28 05:11:09.969: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 28 05:11:09.969: INFO: validating pod update-demo-nautilus-rl25m
Aug 28 05:11:09.973: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 28 05:11:09.973: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 28 05:11:09.974: INFO: update-demo-nautilus-rl25m is verified up and running
STEP: scaling up the replication controller
Aug 28 05:11:09.981: INFO: scanned /root for discovery docs: 
Aug 28 05:11:09.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1948'
Aug 28 05:11:12.339: INFO: stderr: ""
Aug 28 05:11:12.340: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 28 05:11:12.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1948'
Aug 28 05:11:13.625: INFO: stderr: ""
Aug 28 05:11:13.625: INFO: stdout: "update-demo-nautilus-csr99 update-demo-nautilus-rl25m "
Aug 28 05:11:13.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-csr99 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1948'
Aug 28 05:11:14.907: INFO: stderr: ""
Aug 28 05:11:14.907: INFO: stdout: "true"
Aug 28 05:11:14.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-csr99 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1948'
Aug 28 05:11:16.200: INFO: stderr: ""
Aug 28 05:11:16.200: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 28 05:11:16.200: INFO: validating pod update-demo-nautilus-csr99
Aug 28 05:11:16.205: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 28 05:11:16.205: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 28 05:11:16.205: INFO: update-demo-nautilus-csr99 is verified up and running
Aug 28 05:11:16.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rl25m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1948'
Aug 28 05:11:17.466: INFO: stderr: ""
Aug 28 05:11:17.466: INFO: stdout: "true"
Aug 28 05:11:17.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rl25m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1948'
Aug 28 05:11:18.731: INFO: stderr: ""
Aug 28 05:11:18.731: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 28 05:11:18.731: INFO: validating pod update-demo-nautilus-rl25m
Aug 28 05:11:18.736: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 28 05:11:18.736: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 28 05:11:18.736: INFO: update-demo-nautilus-rl25m is verified up and running
STEP: using delete to clean up resources
Aug 28 05:11:18.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1948'
Aug 28 05:11:19.995: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 28 05:11:19.995: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 28 05:11:19.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1948'
Aug 28 05:11:21.291: INFO: stderr: "No resources found in kubectl-1948 namespace.\n"
Aug 28 05:11:21.291: INFO: stdout: ""
Aug 28 05:11:21.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1948 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 28 05:11:22.623: INFO: stderr: ""
Aug 28 05:11:22.623: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:11:22.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1948" for this suite.

• [SLOW TEST:50.242 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":250,"skipped":4215,"failed":0}
SSSSSSSSSSSS
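Condensed, the scale cycle exercised above is two kubectl scale calls plus a poll. Note from the timestamps that the first polls after scaling down still see actual=2: terminating pods keep matching the selector for a few seconds before disappearing.

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1948
kubectl get pods -l name=update-demo --namespace=kubectl-1948    # repeat until a single pod remains
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1948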
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:11:22.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:11:22.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4771" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":251,"skipped":4227,"failed":0}
S
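No STEPs are logged here because the whole check is a single lookup: the default namespace must expose the "kubernetes" service on the secure port. Roughly:

kubectl get service kubernetes --namespace=default -o jsonpath='{.spec.ports[0].port}'   # expected: 443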
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:11:22.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-2d5fdcee-b054-40cc-9ac8-1518196d748e
Aug 28 05:11:22.890: INFO: Pod name my-hostname-basic-2d5fdcee-b054-40cc-9ac8-1518196d748e: Found 0 pods out of 1
Aug 28 05:11:27.897: INFO: Pod name my-hostname-basic-2d5fdcee-b054-40cc-9ac8-1518196d748e: Found 1 pod out of 1
Aug 28 05:11:27.897: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2d5fdcee-b054-40cc-9ac8-1518196d748e" are running
Aug 28 05:11:27.924: INFO: Pod "my-hostname-basic-2d5fdcee-b054-40cc-9ac8-1518196d748e-phnpw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 05:11:22 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 05:11:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 05:11:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 05:11:22 +0000 UTC Reason: Message:}])
Aug 28 05:11:27.925: INFO: Trying to dial the pod
Aug 28 05:11:32.943: INFO: Controller my-hostname-basic-2d5fdcee-b054-40cc-9ac8-1518196d748e: Got expected result from replica 1 [my-hostname-basic-2d5fdcee-b054-40cc-9ac8-1518196d748e-phnpw]: "my-hostname-basic-2d5fdcee-b054-40cc-9ac8-1518196d748e-phnpw", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:11:32.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7882" for this suite.

• [SLOW TEST:10.188 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":252,"skipped":4228,"failed":0}
SSSSSS
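A sketch of the same shape of test: a one-replica ReplicationController whose pods answer with their own hostname. The image and port below are assumptions for illustration, not taken from the log; any server that echoes its hostname works.

kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: hostname-rc
spec:
  replicas: 1
  selector:
    app: hostname-rc
  template:
    metadata:
      labels:
        app: hostname-rc
    spec:
      containers:
      - name: serve-hostname
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumed image
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
EOF
kubectl get pods -l app=hostname-rc -o wide   # note the pod IP; dialing <pod-ip>:9376 from inside the cluster returns the pod's name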
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:11:32.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 28 05:11:33.076: INFO: Waiting up to 5m0s for pod "downwardapi-volume-633222c8-83ef-4704-bca6-3f04ef0f6dd1" in namespace "downward-api-5324" to be "success or failure"
Aug 28 05:11:33.084: INFO: Pod "downwardapi-volume-633222c8-83ef-4704-bca6-3f04ef0f6dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.444441ms
Aug 28 05:11:35.091: INFO: Pod "downwardapi-volume-633222c8-83ef-4704-bca6-3f04ef0f6dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014726775s
Aug 28 05:11:37.096: INFO: Pod "downwardapi-volume-633222c8-83ef-4704-bca6-3f04ef0f6dd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02045517s
STEP: Saw pod success
Aug 28 05:11:37.097: INFO: Pod "downwardapi-volume-633222c8-83ef-4704-bca6-3f04ef0f6dd1" satisfied condition "success or failure"
Aug 28 05:11:37.100: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-633222c8-83ef-4704-bca6-3f04ef0f6dd1 container client-container: 
STEP: delete the pod
Aug 28 05:11:37.160: INFO: Waiting for pod downwardapi-volume-633222c8-83ef-4704-bca6-3f04ef0f6dd1 to disappear
Aug 28 05:11:37.164: INFO: Pod downwardapi-volume-633222c8-83ef-4704-bca6-3f04ef0f6dd1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:11:37.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5324" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4234,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
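The downward API volume exposes the container's own resource fields as files; a minimal sketch for the cpu-limit case (names illustrative; a divisor of 1m renders the limit in millicores):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
EOF
kubectl logs downward-cpu-limit-demo   # prints 500 (the limit in millicores)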
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:11:37.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:12:09.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-956" for this suite.

• [SLOW TEST:32.323 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4258,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
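Each terminate-cmd-* pod pairs an exit behaviour with a restart policy and then asserts on RestartCount, Phase, the Ready condition and State. The simplest variant to reproduce by hand (illustrative names):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never     # with OnFailure or Always the kubelet restarts the container and RestartCount climbs
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 1"]
EOF
kubectl get pod terminate-demo -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'
# once the container has exited: "Failed 0"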
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:12:09.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:12:13.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6063" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4283,"failed":0}
SSSSSSS
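hostAliases entries are appended by the kubelet to the pod's /etc/hosts; a minimal sketch (hostnames illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: 127.0.0.1
    hostnames:
    - foo.local
    - bar.local
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
kubectl logs hostaliases-demo   # /etc/hosts should contain "127.0.0.1 foo.local bar.local"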
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:12:13.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 28 05:12:13.820: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad726087-115e-48c0-9478-29b055db249e" in namespace "projected-9403" to be "success or failure"
Aug 28 05:12:13.850: INFO: Pod "downwardapi-volume-ad726087-115e-48c0-9478-29b055db249e": Phase="Pending", Reason="", readiness=false. Elapsed: 29.858078ms
Aug 28 05:12:15.857: INFO: Pod "downwardapi-volume-ad726087-115e-48c0-9478-29b055db249e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036908534s
Aug 28 05:12:17.863: INFO: Pod "downwardapi-volume-ad726087-115e-48c0-9478-29b055db249e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042902405s
STEP: Saw pod success
Aug 28 05:12:17.863: INFO: Pod "downwardapi-volume-ad726087-115e-48c0-9478-29b055db249e" satisfied condition "success or failure"
Aug 28 05:12:17.867: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ad726087-115e-48c0-9478-29b055db249e container client-container: 
STEP: delete the pod
Aug 28 05:12:18.233: INFO: Waiting for pod downwardapi-volume-ad726087-115e-48c0-9478-29b055db249e to disappear
Aug 28 05:12:18.238: INFO: Pod downwardapi-volume-ad726087-115e-48c0-9478-29b055db249e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:12:18.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9403" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4290,"failed":0}
SSSSSSSSSS
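The interesting part of this variant is what happens when no memory limit is set: resourceFieldRef falls back to the node's allocatable memory. A sketch using a projected downwardAPI source (names illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-memory-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/mem_limit"]
    # no memory limit set, on purpose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi
EOF
kubectl logs projected-memory-demo   # with no limit set, prints the node's allocatable memory in Mi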
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:12:18.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Aug 28 05:12:18.318: INFO: Waiting up to 5m0s for pod "client-containers-af15172e-74e4-44f5-a49e-e392dab3c821" in namespace "containers-1055" to be "success or failure"
Aug 28 05:12:18.357: INFO: Pod "client-containers-af15172e-74e4-44f5-a49e-e392dab3c821": Phase="Pending", Reason="", readiness=false. Elapsed: 39.078765ms
Aug 28 05:12:20.364: INFO: Pod "client-containers-af15172e-74e4-44f5-a49e-e392dab3c821": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045838949s
Aug 28 05:12:22.371: INFO: Pod "client-containers-af15172e-74e4-44f5-a49e-e392dab3c821": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053199573s
STEP: Saw pod success
Aug 28 05:12:22.372: INFO: Pod "client-containers-af15172e-74e4-44f5-a49e-e392dab3c821" satisfied condition "success or failure"
Aug 28 05:12:22.377: INFO: Trying to get logs from node jerma-worker pod client-containers-af15172e-74e4-44f5-a49e-e392dab3c821 container test-container: 
STEP: delete the pod
Aug 28 05:12:22.433: INFO: Waiting for pod client-containers-af15172e-74e4-44f5-a49e-e392dab3c821 to disappear
Aug 28 05:12:22.459: INFO: Pod client-containers-af15172e-74e4-44f5-a49e-e392dab3c821 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:12:22.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1055" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4300,"failed":0}
SSS
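In pod terms, overriding the docker entrypoint is just setting the container's command field (args would override the image's CMD instead); a minimal sketch:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo"]            # replaces the image's ENTRYPOINT
    args: ["override", "worked"] # replaces the image's CMD
EOF
kubectl logs entrypoint-override-demo   # prints "override worked"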
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:12:22.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 28 05:12:22.617: INFO: Waiting up to 5m0s for pod "pod-6c3a5d52-b12a-46c9-b802-48302c400157" in namespace "emptydir-3475" to be "success or failure"
Aug 28 05:12:22.645: INFO: Pod "pod-6c3a5d52-b12a-46c9-b802-48302c400157": Phase="Pending", Reason="", readiness=false. Elapsed: 27.772376ms
Aug 28 05:12:24.652: INFO: Pod "pod-6c3a5d52-b12a-46c9-b802-48302c400157": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034999087s
Aug 28 05:12:26.659: INFO: Pod "pod-6c3a5d52-b12a-46c9-b802-48302c400157": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041978038s
STEP: Saw pod success
Aug 28 05:12:26.659: INFO: Pod "pod-6c3a5d52-b12a-46c9-b802-48302c400157" satisfied condition "success or failure"
Aug 28 05:12:26.666: INFO: Trying to get logs from node jerma-worker2 pod pod-6c3a5d52-b12a-46c9-b802-48302c400157 container test-container: 
STEP: delete the pod
Aug 28 05:12:26.717: INFO: Waiting for pod pod-6c3a5d52-b12a-46c9-b802-48302c400157 to disappear
Aug 28 05:12:26.745: INFO: Pod pod-6c3a5d52-b12a-46c9-b802-48302c400157 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:12:26.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3475" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4303,"failed":0}
SSSSSSSSSS
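medium: Memory is what turns an emptyDir into tmpfs; a sketch that writes a 0644 file and shows the mount (names illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/test/file && chmod 0644 /mnt/test/file && ls -l /mnt/test && mount | grep /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory     # backs the volume with tmpfs instead of node disk
EOF
kubectl logs emptydir-tmpfs-demo   # shows -rw-r--r-- (0644) and a tmpfs mount on /mnt/test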
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:12:26.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587
[It] should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 28 05:12:26.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2609'
Aug 28 05:12:28.158: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 28 05:12:28.158: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
Aug 28 05:12:28.190: INFO: scanned /root for discovery docs: 
Aug 28 05:12:28.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2609'
Aug 28 05:12:45.948: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 28 05:12:45.948: INFO: stdout: "Created e2e-test-httpd-rc-805e90286c9802199d61c3473c310f24\nScaling up e2e-test-httpd-rc-805e90286c9802199d61c3473c310f24 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-805e90286c9802199d61c3473c310f24 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-805e90286c9802199d61c3473c310f24 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Aug 28 05:12:45.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2609'
Aug 28 05:12:47.248: INFO: stderr: ""
Aug 28 05:12:47.248: INFO: stdout: "e2e-test-httpd-rc-805e90286c9802199d61c3473c310f24-sl52l "
Aug 28 05:12:47.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-805e90286c9802199d61c3473c310f24-sl52l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2609'
Aug 28 05:12:48.533: INFO: stderr: ""
Aug 28 05:12:48.534: INFO: stdout: "true"
Aug 28 05:12:48.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-805e90286c9802199d61c3473c310f24-sl52l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2609'
Aug 28 05:12:49.830: INFO: stderr: ""
Aug 28 05:12:49.830: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Aug 28 05:12:49.830: INFO: e2e-test-httpd-rc-805e90286c9802199d61c3473c310f24-sl52l is verified up and running
[AfterEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593
Aug 28 05:12:49.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2609'
Aug 28 05:12:51.102: INFO: stderr: ""
Aug 28 05:12:51.102: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:12:51.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2609" for this suite.

• [SLOW TEST:24.359 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
    should support rolling-update to same image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":259,"skipped":4313,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
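As the stderr lines note, both --generator=run/v1 and rolling-update are deprecated. A rough modern equivalent that likewise replaces pods without changing the image, using a Deployment (kubectl rollout restart requires kubectl 1.15 or newer):

kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine
kubectl rollout restart deployment/e2e-test-httpd   # new ReplicaSet, same image
kubectl rollout status deployment/e2e-test-httpd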
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:12:51.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 28 05:12:51.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:14:35.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1718" for this suite.

• [SLOW TEST:104.784 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":260,"skipped":4349,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
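Serving is toggled per version on the CRD itself; a sketch of the mark-a-version-not-served step and the follow-up check (the CRD name, version index and definition name are placeholders, since the test generates its own):

kubectl patch crd <multi-version-crd-name> --type=json \
  -p='[{"op":"replace","path":"/spec/versions/1/served","value":false}]'
kubectl get --raw /openapi/v2 | grep <published-definition-name>   # the unserved version's schema drops out of the spec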
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:14:35.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:14:36.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8311" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":261,"skipped":4373,"failed":0}
SS
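The same lifecycle in kubectl terms (quota name, namespace and limits are illustrative):

kubectl create quota test-quota --hard=pods=2 --namespace=quota-demo
kubectl get resourcequota test-quota --namespace=quota-demo
kubectl patch resourcequota test-quota --namespace=quota-demo --type=merge -p '{"spec":{"hard":{"pods":"3"}}}'
kubectl delete resourcequota test-quota --namespace=quota-demo
kubectl get resourcequota test-quota --namespace=quota-demo   # now NotFound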
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:14:36.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 28 05:14:36.226: INFO: Waiting up to 5m0s for pod "downwardapi-volume-680c851b-7daf-43fa-b3ec-bbeccfab7f2f" in namespace "downward-api-9394" to be "success or failure"
Aug 28 05:14:36.273: INFO: Pod "downwardapi-volume-680c851b-7daf-43fa-b3ec-bbeccfab7f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 47.60135ms
Aug 28 05:14:38.280: INFO: Pod "downwardapi-volume-680c851b-7daf-43fa-b3ec-bbeccfab7f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053959994s
Aug 28 05:14:40.303: INFO: Pod "downwardapi-volume-680c851b-7daf-43fa-b3ec-bbeccfab7f2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076818201s
STEP: Saw pod success
Aug 28 05:14:40.303: INFO: Pod "downwardapi-volume-680c851b-7daf-43fa-b3ec-bbeccfab7f2f" satisfied condition "success or failure"
Aug 28 05:14:40.306: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-680c851b-7daf-43fa-b3ec-bbeccfab7f2f container client-container: 
STEP: delete the pod
Aug 28 05:14:40.355: INFO: Waiting for pod downwardapi-volume-680c851b-7daf-43fa-b3ec-bbeccfab7f2f to disappear
Aug 28 05:14:40.371: INFO: Pod downwardapi-volume-680c851b-7daf-43fa-b3ec-bbeccfab7f2f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:14:40.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9394" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4375,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:14:40.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-613efa60-41fa-4884-86fb-545561e47fa8
STEP: Creating a pod to test consume configMaps
Aug 28 05:14:40.872: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c865420b-af54-4bb1-950d-50dd3ebb4951" in namespace "projected-1439" to be "success or failure"
Aug 28 05:14:40.893: INFO: Pod "pod-projected-configmaps-c865420b-af54-4bb1-950d-50dd3ebb4951": Phase="Pending", Reason="", readiness=false. Elapsed: 20.938275ms
Aug 28 05:14:42.921: INFO: Pod "pod-projected-configmaps-c865420b-af54-4bb1-950d-50dd3ebb4951": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048601197s
Aug 28 05:14:45.208: INFO: Pod "pod-projected-configmaps-c865420b-af54-4bb1-950d-50dd3ebb4951": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335981851s
Aug 28 05:14:47.398: INFO: Pod "pod-projected-configmaps-c865420b-af54-4bb1-950d-50dd3ebb4951": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.526272938s
STEP: Saw pod success
Aug 28 05:14:47.399: INFO: Pod "pod-projected-configmaps-c865420b-af54-4bb1-950d-50dd3ebb4951" satisfied condition "success or failure"
Aug 28 05:14:47.405: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-c865420b-af54-4bb1-950d-50dd3ebb4951 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 28 05:14:48.603: INFO: Waiting for pod pod-projected-configmaps-c865420b-af54-4bb1-950d-50dd3ebb4951 to disappear
Aug 28 05:14:49.056: INFO: Pod pod-projected-configmaps-c865420b-af54-4bb1-950d-50dd3ebb4951 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:14:49.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1439" for this suite.

• [SLOW TEST:9.031 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4378,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:14:49.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 28 05:14:51.793: INFO: Waiting up to 5m0s for pod "pod-12752acc-649c-4c43-b8bc-d3b76fa4c833" in namespace "emptydir-4548" to be "success or failure"
Aug 28 05:14:52.230: INFO: Pod "pod-12752acc-649c-4c43-b8bc-d3b76fa4c833": Phase="Pending", Reason="", readiness=false. Elapsed: 436.923032ms
Aug 28 05:14:54.236: INFO: Pod "pod-12752acc-649c-4c43-b8bc-d3b76fa4c833": Phase="Pending", Reason="", readiness=false. Elapsed: 2.44288216s
Aug 28 05:14:56.324: INFO: Pod "pod-12752acc-649c-4c43-b8bc-d3b76fa4c833": Phase="Pending", Reason="", readiness=false. Elapsed: 4.531064072s
Aug 28 05:14:58.532: INFO: Pod "pod-12752acc-649c-4c43-b8bc-d3b76fa4c833": Phase="Pending", Reason="", readiness=false. Elapsed: 6.73913365s
Aug 28 05:15:00.539: INFO: Pod "pod-12752acc-649c-4c43-b8bc-d3b76fa4c833": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.746375292s
STEP: Saw pod success
Aug 28 05:15:00.540: INFO: Pod "pod-12752acc-649c-4c43-b8bc-d3b76fa4c833" satisfied condition "success or failure"
Aug 28 05:15:00.545: INFO: Trying to get logs from node jerma-worker pod pod-12752acc-649c-4c43-b8bc-d3b76fa4c833 container test-container: 
STEP: delete the pod
Aug 28 05:15:00.595: INFO: Waiting for pod pod-12752acc-649c-4c43-b8bc-d3b76fa4c833 to disappear
Aug 28 05:15:00.601: INFO: Pod pod-12752acc-649c-4c43-b8bc-d3b76fa4c833 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:15:00.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4548" for this suite.

• [SLOW TEST:11.199 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4402,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:15:00.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-d9a3b4c5-05a3-4291-9ede-89ae7c4f4a46
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:15:00.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2447" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":265,"skipped":4408,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:15:00.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 28 05:15:00.908: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-708 /api/v1/namespaces/watch-708/configmaps/e2e-watch-test-watch-closed d07e8af5-ec8d-4854-b0fa-41faedc78e15 4502019 0 2020-08-28 05:15:00 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 28 05:15:00.909: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-708 /api/v1/namespaces/watch-708/configmaps/e2e-watch-test-watch-closed d07e8af5-ec8d-4854-b0fa-41faedc78e15 4502020 0 2020-08-28 05:15:00 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 28 05:15:01.007: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-708 /api/v1/namespaces/watch-708/configmaps/e2e-watch-test-watch-closed d07e8af5-ec8d-4854-b0fa-41faedc78e15 4502021 0 2020-08-28 05:15:00 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 28 05:15:01.008: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-708 /api/v1/namespaces/watch-708/configmaps/e2e-watch-test-watch-closed d07e8af5-ec8d-4854-b0fa-41faedc78e15 4502022 0 2020-08-28 05:15:00 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:15:01.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-708" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":266,"skipped":4415,"failed":0}
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:15:01.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:50
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 28 05:15:05.168: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 28 05:15:16.384: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:15:16.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4668" for this suite.

• [SLOW TEST:15.352 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":267,"skipped":4418,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:15:16.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 28 05:15:23.042: INFO: Successfully updated pod "adopt-release-mhpj5"
STEP: Checking that the Job readopts the Pod
Aug 28 05:15:23.043: INFO: Waiting up to 15m0s for pod "adopt-release-mhpj5" in namespace "job-5575" to be "adopted"
Aug 28 05:15:23.063: INFO: Pod "adopt-release-mhpj5": Phase="Running", Reason="", readiness=true. Elapsed: 20.289318ms
Aug 28 05:15:25.071: INFO: Pod "adopt-release-mhpj5": Phase="Running", Reason="", readiness=true. Elapsed: 2.027922022s
Aug 28 05:15:25.071: INFO: Pod "adopt-release-mhpj5" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 28 05:15:25.587: INFO: Successfully updated pod "adopt-release-mhpj5"
STEP: Checking that the Job releases the Pod
Aug 28 05:15:25.587: INFO: Waiting up to 15m0s for pod "adopt-release-mhpj5" in namespace "job-5575" to be "released"
Aug 28 05:15:25.640: INFO: Pod "adopt-release-mhpj5": Phase="Running", Reason="", readiness=true. Elapsed: 53.19837ms
Aug 28 05:15:25.641: INFO: Pod "adopt-release-mhpj5" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:15:25.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5575" for this suite.

• [SLOW TEST:9.360 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":268,"skipped":4431,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:15:25.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 28 05:15:25.857: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 28 05:15:28.707: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:15:28.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8725" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":269,"skipped":4472,"failed":0}
SSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:15:28.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 28 05:15:36.178: INFO: Successfully updated pod "pod-update-14d927ce-5954-4204-8ccc-9891186caf6b"
STEP: verifying the updated pod is in kubernetes
Aug 28 05:15:36.193: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:15:36.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6461" for this suite.

• [SLOW TEST:7.219 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4476,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:15:36.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Aug 28 05:15:36.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Aug 28 05:16:52.839: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 05:17:03.035: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:18:20.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6061" for this suite.

• [SLOW TEST:163.983 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":271,"skipped":4484,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:18:20.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-ftzp
STEP: Creating a pod to test atomic-volume-subpath
Aug 28 05:18:20.332: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ftzp" in namespace "subpath-6304" to be "success or failure"
Aug 28 05:18:20.350: INFO: Pod "pod-subpath-test-configmap-ftzp": Phase="Pending", Reason="", readiness=false. Elapsed: 17.887015ms
Aug 28 05:18:22.357: INFO: Pod "pod-subpath-test-configmap-ftzp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024696296s
Aug 28 05:18:24.384: INFO: Pod "pod-subpath-test-configmap-ftzp": Phase="Running", Reason="", readiness=true. Elapsed: 4.051863217s
Aug 28 05:18:26.390: INFO: Pod "pod-subpath-test-configmap-ftzp": Phase="Running", Reason="", readiness=true. Elapsed: 6.058568864s
Aug 28 05:18:28.399: INFO: Pod "pod-subpath-test-configmap-ftzp": Phase="Running", Reason="", readiness=true. Elapsed: 8.067536583s
Aug 28 05:18:30.408: INFO: Pod "pod-subpath-test-configmap-ftzp": Phase="Running", Reason="", readiness=true. Elapsed: 10.075891201s
Aug 28 05:18:32.415: INFO: Pod "pod-subpath-test-configmap-ftzp": Phase="Running", Reason="", readiness=true. Elapsed: 12.082779421s
Aug 28 05:18:34.421: INFO: Pod "pod-subpath-test-configmap-ftzp": Phase="Running", Reason="", readiness=true. Elapsed: 14.088690206s
Aug 28 05:18:36.426: INFO: Pod "pod-subpath-test-configmap-ftzp": Phase="Running", Reason="", readiness=true. Elapsed: 16.094430457s
Aug 28 05:18:38.434: INFO: Pod "pod-subpath-test-configmap-ftzp": Phase="Running", Reason="", readiness=true. Elapsed: 18.102108181s
Aug 28 05:18:40.441: INFO: Pod "pod-subpath-test-configmap-ftzp": Phase="Running", Reason="", readiness=true. Elapsed: 20.109497353s
Aug 28 05:18:42.449: INFO: Pod "pod-subpath-test-configmap-ftzp": Phase="Running", Reason="", readiness=true. Elapsed: 22.116660874s
Aug 28 05:18:44.467: INFO: Pod "pod-subpath-test-configmap-ftzp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.135609174s
STEP: Saw pod success
Aug 28 05:18:44.468: INFO: Pod "pod-subpath-test-configmap-ftzp" satisfied condition "success or failure"
Aug 28 05:18:44.471: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-ftzp container test-container-subpath-configmap-ftzp: 
STEP: delete the pod
Aug 28 05:18:44.504: INFO: Waiting for pod pod-subpath-test-configmap-ftzp to disappear
Aug 28 05:18:44.562: INFO: Pod pod-subpath-test-configmap-ftzp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-ftzp
Aug 28 05:18:44.562: INFO: Deleting pod "pod-subpath-test-configmap-ftzp" in namespace "subpath-6304"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:18:44.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6304" for this suite.

• [SLOW TEST:24.384 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":272,"skipped":4495,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:18:44.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-cd7688df-a225-4462-9aaf-b56f63899f2d in namespace container-probe-6418
Aug 28 05:18:51.043: INFO: Started pod test-webserver-cd7688df-a225-4462-9aaf-b56f63899f2d in namespace container-probe-6418
STEP: checking the pod's current state and verifying that restartCount is present
Aug 28 05:18:51.046: INFO: Initial restart count of pod test-webserver-cd7688df-a225-4462-9aaf-b56f63899f2d is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:22:52.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6418" for this suite.

• [SLOW TEST:248.336 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4505,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:22:52.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Aug 28 05:22:53.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 28 05:22:54.747: INFO: stderr: ""
Aug 28 05:22:54.747: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:22:54.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5894" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":274,"skipped":4517,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:22:54.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-616e2c44-e7ea-4e7f-a7c6-2888c1a79039
STEP: Creating a pod to test consume configMaps
Aug 28 05:22:54.980: INFO: Waiting up to 5m0s for pod "pod-configmaps-12327e44-689e-4e15-a15d-10c01a94f60b" in namespace "configmap-1957" to be "success or failure"
Aug 28 05:22:55.004: INFO: Pod "pod-configmaps-12327e44-689e-4e15-a15d-10c01a94f60b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.666528ms
Aug 28 05:22:57.012: INFO: Pod "pod-configmaps-12327e44-689e-4e15-a15d-10c01a94f60b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031030186s
Aug 28 05:22:59.019: INFO: Pod "pod-configmaps-12327e44-689e-4e15-a15d-10c01a94f60b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038616967s
STEP: Saw pod success
Aug 28 05:22:59.020: INFO: Pod "pod-configmaps-12327e44-689e-4e15-a15d-10c01a94f60b" satisfied condition "success or failure"
Aug 28 05:22:59.025: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-12327e44-689e-4e15-a15d-10c01a94f60b container configmap-volume-test: 
STEP: delete the pod
Aug 28 05:22:59.166: INFO: Waiting for pod pod-configmaps-12327e44-689e-4e15-a15d-10c01a94f60b to disappear
Aug 28 05:22:59.221: INFO: Pod pod-configmaps-12327e44-689e-4e15-a15d-10c01a94f60b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:22:59.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1957" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4535,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:22:59.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 28 05:22:59.510: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 05:23:17.892: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:24:25.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4543" for this suite.

• [SLOW TEST:86.087 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":276,"skipped":4536,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:24:25.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-d706f7dd-e36f-4dc1-9c58-aab472bf56b0
STEP: Creating a pod to test consume secrets
Aug 28 05:24:25.472: INFO: Waiting up to 5m0s for pod "pod-secrets-658480d5-fae2-4085-83e8-f0e6ba2c75e8" in namespace "secrets-717" to be "success or failure"
Aug 28 05:24:25.524: INFO: Pod "pod-secrets-658480d5-fae2-4085-83e8-f0e6ba2c75e8": Phase="Pending", Reason="", readiness=false. Elapsed: 52.225782ms
Aug 28 05:24:27.572: INFO: Pod "pod-secrets-658480d5-fae2-4085-83e8-f0e6ba2c75e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100427356s
Aug 28 05:24:29.579: INFO: Pod "pod-secrets-658480d5-fae2-4085-83e8-f0e6ba2c75e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10699249s
STEP: Saw pod success
Aug 28 05:24:29.579: INFO: Pod "pod-secrets-658480d5-fae2-4085-83e8-f0e6ba2c75e8" satisfied condition "success or failure"
Aug 28 05:24:29.583: INFO: Trying to get logs from node jerma-worker pod pod-secrets-658480d5-fae2-4085-83e8-f0e6ba2c75e8 container secret-volume-test: 
STEP: delete the pod
Aug 28 05:24:29.670: INFO: Waiting for pod pod-secrets-658480d5-fae2-4085-83e8-f0e6ba2c75e8 to disappear
Aug 28 05:24:29.676: INFO: Pod pod-secrets-658480d5-fae2-4085-83e8-f0e6ba2c75e8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:24:29.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-717" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4547,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 28 05:24:29.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 28 05:24:36.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1435" for this suite.
STEP: Destroying namespace "nsdeletetest-2655" for this suite.
Aug 28 05:24:36.132: INFO: Namespace nsdeletetest-2655 was already deleted
STEP: Destroying namespace "nsdeletetest-8003" for this suite.

• [SLOW TEST:6.447 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":278,"skipped":4565,"failed":0}
S
Aug 28 05:24:36.138: INFO: Running AfterSuite actions on all nodes
Aug 28 05:24:36.139: INFO: Running AfterSuite actions on node 1
Aug 28 05:24:36.139: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4566,"failed":0}

Ran 278 of 4844 Specs in 5835.844 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4566 Skipped
PASS